Scalability

On this page we describe the scalability calculation used in the 1-month benchmark simulations.

Overview

To calculate how well a run scaled, we use the ratio of CPU time to wall time. Both metrics can be obtained from your scheduler's job accounting information. Below we describe how to obtain this information and calculate scalability.
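
In other words, the quantity of interest is the following (a minimal Python sketch; the scalability function is only illustrative and assumes both times have already been converted to seconds):

  # Illustrative only: scalability is the ratio of CPU time to wall-clock time,
  # and dividing that ratio by the CPU count gives the percentage of ideal performance.
  def scalability(cpu_time_s, wall_time_s, ncpus):
      ratio = cpu_time_s / wall_time_s
      return ratio, ratio / ncpus * 100.0   # (CPU/wall ratio, % of ideal)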

SLURM scheduler

After your job has finished, type:

  sacct -l -j JOBID

The -l option returns the “long” output for your job. You may also specify exactly which fields you would like to obtain. For example:

  sacct -j JOBID --format=JobID,JobName,User,Partition,NNodes,NCPUS,MaxRSS,TotalCPU,Elapsed

From the returned output, note the values for TotalCPU and Elapsed. For example:

        JobID    JobName      User  Partition   NNodes      NCPUS     MaxRSS   TotalCPU    Elapsed 
 ------------ ---------- --------- ---------- -------- ---------- ---------- ---------- ---------- 
 53901011     HEMCO+Hen+ ryantosca      jacob        1          8        16? 1-03:35:33   04:15:53 
 53901011.ba+      batch                             1          8   6329196K 1-03:35:33   04:15:53 

Note that there are two entries. The first line represents the queue to which you submitted the job (i.e. jacob), and the second line represents the internal queue in which the job actually ran (i.e. batch).

A good measure of how well your job scales across multiple CPUs is the ratio of CPU time to wall-clock time. You can compute this by taking the ratio of the SLURM quantities

  TotalCPU [s] / Elapsed [s]

as reported by the sacct command.

From the above example:

  CPU time / wall time = 1d 03h 35m 33s / 4h 15m 53s
                       = 99333 s        / 15353 s    = 6.4699

A theoretically ideal job running on 8 CPUs would have a CPU time / wall time ratio of exactly 8. In practice this is never attained, due to file I/O and system overhead. By dividing the CPU time / wall time ratio computed above by the number of CPUs that were used (in this example, 8), you can estimate how efficiently your job ran compared to ideal performance:

  % of ideal performance = ( 6.4699 / 8 ) * 100 = 80.8743%
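
If you need to repeat this calculation for many jobs, a short script can do the time conversion and arithmetic for you. Here is a minimal Python sketch that reproduces the example above; the slurm_time_to_seconds helper is only illustrative and handles just the [D-]HH:MM:SS formats shown in the sacct output:

  # Minimal sketch: convert the SLURM time strings shown above to seconds
  # and reproduce the scalability calculation.
  def slurm_time_to_seconds(timestr):
      """Convert a SLURM time string such as '1-03:35:33' or '04:15:53' to seconds."""
      days = 0
      if "-" in timestr:                              # a leading "D-" gives the day count
          day_part, timestr = timestr.split("-", 1)
          days = int(day_part)
      hours, minutes, seconds = (int(x) for x in timestr.split(":"))
      return days * 86400 + hours * 3600 + minutes * 60 + seconds
  total_cpu = slurm_time_to_seconds("1-03:35:33")     # TotalCPU from sacct
  elapsed   = slurm_time_to_seconds("04:15:53")       # Elapsed from sacct
  ncpus     = 8                                       # NCPUS from sacct
  ratio     = total_cpu / elapsed
  print(f"CPU time / wall time   = {total_cpu} s / {elapsed} s = {ratio:.4f}")
  print(f"% of ideal performance = {ratio / ncpus * 100:.4f}%")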

--Bob Yantosca (talk) 16:59, 21 December 2015 (UTC)

SGE scheduler

After your job has finished, type:

  qacct -j JOBID

From the returned output, note the values for cpu and ru_wallclock. For example:

  ==============================================================
  qname        bench               
  hostname     titan-10.as.harvard.edu
  group        mpayer              
  owner        mpayer              
  project      NONE                
  department   defaultdepartment   
  jobname      v10-01-public-release-Run1.run
  jobnumber    81969               
  taskid       undefined
  account      sge                 
  priority     0                   
  qsub_time    Thu Jun 18 17:14:15 2015
  start_time   Thu Jun 18 17:14:55 2015
  end_time     Fri Jun 19 01:01:48 2015
  granted_pe   bench               
  slots        8                   
  failed       0    
  exit_status  0                   
  ru_wallclock 28013
  ru_utime     189568.938   
  ru_stime     1718.925     
  ru_maxrss    5941376             
  ru_ixrss     0
  ru_ismrss    0                   
  ru_idrss     0                   
  ru_isrss     0                   
  ru_minflt    5437936             
  ru_majflt    23                  
  ru_nswap     0                   
  ru_inblock   36810536            
  ru_oublock   834224              
  ru_msgsnd    0                   
  ru_msgrcv    0                   
  ru_nsignals  0                   
  ru_nvcsw     390660              
  ru_nivcsw    19093052            
  cpu          191287.863
  mem          1266832.593       
  io           30.376            
  iow          0.000             
  maxvmem      6.817G
  arid         undefined

Calculate the scalability using:

  cpu / ru_wallclock

From the above example:

  Scalability = 191287.863 / 28013 = 6.8285
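
As in the SLURM example, dividing this ratio by the number of slots used (8, from the qacct output above) gives ( 6.8285 / 8 ) * 100 = 85.3567% of ideal performance. Here is a minimal Python sketch of the same arithmetic, with the cpu, ru_wallclock, and slots values copied from the qacct output above:

  # Minimal sketch: compute scalability from the qacct fields shown above.
  cpu_time     = 191287.863    # "cpu" field from qacct, in seconds
  ru_wallclock = 28013.0       # "ru_wallclock" field from qacct, in seconds
  slots        = 8             # "slots" field from qacct
  scalability  = cpu_time / ru_wallclock
  print(f"Scalability            = {scalability:.4f}")
  print(f"% of ideal performance = {scalability / slots * 100:.4f}%")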

--Melissa Sulprizio (talk) 22:04, 11 September 2015 (UTC)