Barley info



Follow the FarmShare tutorial or the User Guide

Current barley policies

  • 480 max jobs per user ('qconf -sconf | grep max_u_jobs')
  • 3000 max jobs in the system ('qconf -sconf | grep max_jobs')
  • 48-hour max runtime for any job in the regular queue ('qconf -sq trusty.q | grep h_rt')
  • 7-day max runtime in the long queue ('qconf -sq long.q | grep h_rt')
  • 15-minute max runtime in test.q ('qconf -sq test.q | grep h_rt')
  • 4GB default mem_free request per slot ('qconf -sc | grep mem_free')
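
These limits matter at submission time. As a minimal sketch (queue names and limits come from the list above; the options are standard Grid Engine resource requests, and 'my_job.script' is a placeholder for your own script), a job that needs more than 48 hours and more than the default 4GB per slot could be submitted like this:

    # request the long queue, a 6-day runtime cap, and 8GB mem_free per slot
    qsub -q long.q -l h_rt=144:00:00 -l mem_free=8G my_job.script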

Technical details

  • 19 new machines: AMD Magny-Cours, 24 cores each, 96GB RAM
  • 1 new machine: AMD Magny-Cours, 24 cores, 192GB RAM
  • ~450GB local scratch on each
  • ~100TB in /farmshare/user_data, shared across all barley and corn systems (introduced summer 2013). This space is only for data actively in use by currently running jobs on the barley or corn machines, and it is not backed up. Keep your usage under 1 TB; older files will be deleted as the file system fills, and data associated with inactive SUNetIDs will be deleted. A quick way to check your usage is shown after this list.
  • Open Grid Scheduler 2011.11p1
  • 10GbE interconnect (Juniper QFX3500 switch)
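
To stay within the 1 TB guideline for /farmshare/user_data, you can check your own usage with standard tools (a sketch, assuming your directory is named after your SUNetID as described above):

    # report total space used in your shared data directory
    du -sh /farmshare/user_data/$USER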

How to use the barley machines

To start using these machines, see the man pages for 'sge_intro' and for the 'qhost', 'qstat', 'qsub', and 'qdel' commands.
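
For a quick look at the cluster before submitting anything, the basic status commands can be used as follows (a sketch; see the man pages for full details of the output):

    qhost              # list execution hosts with their load and memory
    qstat -f           # show all queues and the jobs running in them
    qstat -u $USER     # show only your own pending and running jobs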

Initial issues:

  • You are limited in space to your AFS home directory ($HOME) and the local scratch disk on each node ($TMPDIR)
  • The execution hosts don't accept interactive jobs; only batch jobs are supported for now.
  • You'll want to make sure you have a valid Kerberos TGT and AFS token; a quick way to check is shown below.
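
A quick check, assuming the standard Kerberos and AFS client tools available on the FarmShare machines:

    klist     # show your Kerberos tickets and when your TGT expires
    tokens    # show your AFS tokens
    kinit     # renew your TGT if it has expired
    aklog     # obtain a fresh AFS token from the TGT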

If you want to use the newer bigger storage:

  1. log into any FarmShare machine: ssh sunetid@corn.stanford.edu
  2. cd to /farmshare/user_data/<your username> (or wait 5 minutes if it doesn't exist yet)
  3. write a job script: "$EDITOR test_job.script" (a minimal example script is sketched after this list)
    1. see 'man qsub' for more info
    2. use environment variable $TMPDIR for local scratch
    3. use /farmshare/user_data/<your username> for shared data directory
  4. submit the job for processing: "qsub -cwd test_job.script"
  5. monitor the jobs with "qstat -f -j JOBID"
    1. see 'man qstat' for more info
  6. check the output files that you specified in your job script (the input and output files must be in /farmshare/user_data/)
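
A minimal sketch of the test_job.script from step 3. The directives, file names, and workload are illustrative placeholders; only the $TMPDIR and /farmshare/user_data conventions come from the steps above:

    #!/bin/bash
    #$ -N test_job               # job name
    #$ -l h_rt=01:00:00          # request 1 hour of runtime
    #$ -l mem_free=4G            # request 4GB per slot (the default)

    DATA=/farmshare/user_data/$USER     # shared data directory from step 2
    cd "$TMPDIR"                        # use node-local scratch for working files
    cp "$DATA/input.dat" .              # stage input from shared storage (hypothetical file name)
    sort input.dat > output.dat         # placeholder workload; replace with your own program
    cp output.dat "$DATA/"              # copy results back to shared storage before the job ends

After the job finishes, output.dat will be back in /farmshare/user_data, where step 6 expects to find it.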

For any questions, please email 'farmshare-discuss@lists.stanford.edu'. Some good introductory usage examples are here: http://gridscheduler.sourceforge.net/howto/basic_usage.html
