<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://cgi.math.princeton.edu/compudocwiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Plazonic</id>
	<title>CompudocWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://cgi.math.princeton.edu/compudocwiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Plazonic"/>
	<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Special:Contributions/Plazonic"/>
	<updated>2026-04-28T01:27:43Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.11</generator>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=User_talk:Plazonic&amp;diff=1912</id>
		<title>User talk:Plazonic</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=User_talk:Plazonic&amp;diff=1912"/>
		<updated>2012-03-02T20:35:36Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: Created page with &amp;quot;test&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;test&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Main_Page&amp;diff=1911</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Main_Page&amp;diff=1911"/>
		<updated>2011-09-02T13:20:51Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: /* Quick Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Quick Links ==&lt;br /&gt;
{| border=&amp;quot;0&amp;quot; cellpadding=&amp;quot;2&amp;quot; width=&amp;quot;90%&amp;quot;&lt;br /&gt;
! [http://math.princeton.edu/ssh.html http://cgi.math.princeton.edu/compudocwiki/images/7/74/Terminal.jpg]!![https://www.math.princeton.edu/mail http://cgi.math.princeton.edu/compudocwiki/images/2/24/Email.jpg]&lt;br /&gt;
|-&lt;br /&gt;
! Web SSH http://math.princeton.edu/ssh.shtml &amp;lt;br&amp;gt; Alternate Web SSH http://math.princeton.edu/ssh2.shtml!! Webmail https://webmail.math.princeton.edu/&lt;br /&gt;
|- style=&amp;quot;height:40px&amp;quot;&lt;br /&gt;
! &lt;br /&gt;
|- &lt;br /&gt;
! [https://webmail.math.princeton.edu/passwd/ http://cgi.math.princeton.edu/compudocwiki/images/6/64/Password.jpg] !! [https://webmail.math.princeton.edu/vacation/ http://cgi.math.princeton.edu/compudocwiki/images/3/3a/Vacation.jpg]&lt;br /&gt;
|- &lt;br /&gt;
! Change Password !! Set Vacation Message&lt;br /&gt;
|- &lt;br /&gt;
|+&amp;amp;nbsp;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;BR&amp;gt;&amp;lt;BR&amp;gt;&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Need Help? Contact Math/PACM computing support by e-mail [mailto:compudoc@princeton.edu compudoc@princeton.edu]. ==&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Add_TeX_to_your_webpage&amp;diff=1910</id>
		<title>HowTos:Add TeX to your webpage</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Add_TeX_to_your_webpage&amp;diff=1910"/>
		<updated>2011-07-28T12:47:07Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This HowTo provides basic instructions on how to add TeX based formulas to your webpages located on the math webserver by using MathJax.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
The main math webserver, as well as the cgi webserver, has the MathJax package installed, and you can very easily use it to add TeX based formulas/text to your webpages hosted on the math webservers.  &lt;br /&gt;
&lt;br /&gt;
MathJax is javascript-based software that can interpret TeX/LaTeX formulas embedded in your webpage and replace them with fonts and images to make them look as close as possible to the TeX/LaTeX output.  You can find extensive information on the [http://www.mathjax.org MathJax homepage].  Here we will just suggest a few quick ways to use it; for more extensive information check the MathJax website.&lt;br /&gt;
&lt;br /&gt;
== Quick Start ==&lt;br /&gt;
To get started insert the following html code somewhere in the &amp;lt;head&amp;gt; section of your webpage:&lt;br /&gt;
  &amp;lt;SCRIPT SRC=&amp;quot;/mathjax/MathJax.js&amp;quot;&amp;gt;&lt;br /&gt;
    MathJax.Hub.Config({&lt;br /&gt;
      extensions: [&amp;quot;tex2jax.js&amp;quot;,&amp;quot;TeX/AMSmath.js&amp;quot;,&amp;quot;TeX/AMSsymbols.js&amp;quot;],&lt;br /&gt;
      jax: [&amp;quot;input/TeX&amp;quot;,&amp;quot;output/HTML-CSS&amp;quot;],&lt;br /&gt;
      tex2jax: {inlineMath: [[&amp;quot;$&amp;quot;,&amp;quot;$&amp;quot;],[&amp;quot;\\(&amp;quot;,&amp;quot;\\)&amp;quot;]],  processEscapes: true}&lt;br /&gt;
    });&lt;br /&gt;
  &amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
This code will make sure MathJax is loaded if and only if you use LaTeX style formulas somewhere in the body of your document. That means that the following text:&lt;br /&gt;
                \( f(\alpha) = x+\beta \)&lt;br /&gt;
will get translated into an inline formula, as in: \( f(\alpha)=x+\beta \) - note the small delay before the text is converted into formulas.  For displayed equations you can do:&lt;br /&gt;
                \[ \int_\alpha^\beta x = \mathbb{A} \]&lt;br /&gt;
which gets translated like this: \[ \int_\alpha^\beta x = \mathbb{A} \]&lt;br /&gt;
&lt;br /&gt;
== More Information ==&lt;br /&gt;
To see more information on how to tweak and use MathJax please refer to their website, in particular the [http://www.mathjax.org/docs/1.1/configuration.html configuration webpage].&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Help:Contents&amp;diff=1909</id>
		<title>Help:Contents</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Help:Contents&amp;diff=1909"/>
		<updated>2011-01-10T02:05:22Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In order to contact Math/PACM computing support please e-mail [mailto:compudoc@princeton.edu compudoc@princeton.edu] or call 8-0476.&lt;br /&gt;
&lt;br /&gt;
Josko Plazonic&lt;br /&gt;
&lt;br /&gt;
222 Fine Hall&lt;br /&gt;
&lt;br /&gt;
Michael Barone&lt;br /&gt;
&lt;br /&gt;
215 Fine Hall&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos&amp;diff=1883</id>
		<title>HowTos</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos&amp;diff=1883"/>
		<updated>2010-08-12T15:03:51Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: /* Printing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you will find instructions on how to do some of the more common computing tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Certificates ==&lt;br /&gt;
We used to rely on our own unsigned SSL certificates for Math web servers and e-mail, but we have recently replaced them with [http://certs.ipsca.com/ ipsCA]'s signed certificates.  ipsCA generously provides high quality free SSL certificates to educational institutions.  &lt;br /&gt;
&lt;br /&gt;
All recent browsers and e-mail clients have the appropriate root certificate needed to verify the identity of our servers, so no additional importing of certificates should be required. If you encounter any problems with our SSL certificates (for example, if your browser or e-mail client cannot recognize or verify our certificate), please let us know.&lt;br /&gt;
&lt;br /&gt;
== Connect to Math/PACM systems remotely ==&lt;br /&gt;
There are a number of different ways to access Math/PACM systems and services - login servers, computational machines, e-mail, files on the file server and others.  Here are some of them:&lt;br /&gt;
* [[HowTos:Access your files on Math/PACM file server via cifs/samba|Access your files on Math/PACM file server via cifs/samba on Windows, Mac OS X or Linux]] - directly access your files on the file server, on campus or after connecting via VPN&lt;br /&gt;
* [[HowTos:Connect to login servers via ssh|Connect to login servers via ssh from Windows, Mac OS X or Linux]] (also copy files back and forth by using ssh/scp)&lt;br /&gt;
* [[HowTos:Remote Linux Desktop access|Remote Linux Desktop access]]&lt;br /&gt;
For E-mail reading/access only please read below.&lt;br /&gt;
&lt;br /&gt;
== E-mail access and configuration ==&lt;br /&gt;
* [[HowTos:E-mail configuration for Thunderbird on Math Linux machines|Configure Thunderbird 2.* on Math Linux workstations]]&lt;br /&gt;
* [[HowTos:E-mail configuration for Thunderbird|Configure Thunderbird 2.* in general]]&lt;br /&gt;
* [[HowTos:E-mail configuration for Thunderbird 3|Configure Thunderbird 3.* in general]]&lt;br /&gt;
* [[HowTos:Read E-mail with webmail|Read your e-mail in your web browser by using Horde/IMP webmail]]&lt;br /&gt;
* [[HowTos:E-mail configuration for Outlook 2007.* in general|E-mail configuration for Outlook 2007 in general]]&lt;br /&gt;
* [[HowTos:E-mail configuration for Mac OS X.* in general|E-mail configuration for Mac OS X in general]]&lt;br /&gt;
&lt;br /&gt;
== File restore/undelete/backup/snapshots ==&lt;br /&gt;
* [[HowTos:Restore files from snapshots on Linux from home directory on Math file server|How to restore deleted files or previous versions on Linux from home directory on Math/PACM file server]] (for files deleted or changed within last 4 days)&lt;br /&gt;
* [[HowTos:Restore files from snapshots on Windows from home directory on Math file server|How to restore deleted files or previous versions on Windows from home directory on Math/PACM file server]] (for files deleted or changed within last 4 days)&lt;br /&gt;
* [[HowTos:Restore files from snapshots on Mac OS X from home directory on Math file server|How to restore deleted files or previous versions on Mac OS X from home directory on Math/PACM file server]] (for files deleted or changed within last 4 days)&lt;br /&gt;
* [[HowTos:Restore files from backups|How to obtain files from backups]] (for files deleted or changed more than 4 days ago and usually not more than 3-4 months ago)&lt;br /&gt;
&lt;br /&gt;
== Printing ==&lt;br /&gt;
* [[HowTos:Configure MacOSX for Dell W5300n|How to configure your Macintosh for printing with the Dell printers on 11th and 5th floor (W5300n)]]&lt;br /&gt;
* [[HowTos:Configure Mac OS X Printing|How to configure your Macintosh for printing to one of Fine Hall printers]]&lt;br /&gt;
* [[HowTos:Configure Windows Printing for Fine Hall|How to configure your Microsoft Windows computer for printing to public printers in Fine Hall]]&lt;br /&gt;
&lt;br /&gt;
== TeX ==&lt;br /&gt;
* [[HowTos:Install TeX on a Microsoft Windows computer|A quick HowTo about installing TeX on a Microsoft Windows computer]]&lt;br /&gt;
* [[HowTos:Add TeX to your webpage|How to add good looking TeX code to your webpages on Math webserver]]&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Computational_clusters_in_Fine_Hall&amp;diff=1882</id>
		<title>Documentation and Information:Computational clusters in Fine Hall</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Computational_clusters_in_Fine_Hall&amp;diff=1882"/>
		<updated>2010-08-05T17:19:00Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Fine Hall machine room currently hosts one mini computational cluster.&lt;br /&gt;
&lt;br /&gt;
== NewComp computing cluster ==&lt;br /&gt;
=== Description ===&lt;br /&gt;
The NewComp mini computational cluster consists of 4 nodes, each with 2 Xeon X5680 CPUs (6 cores each, 12 cores per node), running at 3.33GHz.  Each node has 96GB of memory (8 GB/core).  The head node is equipped with one Intel Xeon X5650 CPU (6 cores, running at 2.67GHz) and 12 GB of memory.&lt;br /&gt;
&lt;br /&gt;
Nodes are connected with gigabit ethernet networking as well as 4x Infiniband.  &lt;br /&gt;
=== Configuration ===&lt;br /&gt;
The cluster is integrated into Fine Hall Math/PACM network and all the cluster machines mount Math/PACM home directories.  The operating system used on these machines is a clone of RHEL 6.&lt;br /&gt;
&lt;br /&gt;
For temporary storage, besides /tmp, one can also use /scratch - with no quotas. It must be emphasized that neither /scratch nor /tmp can be used for permanent data storage and no crucial data should be stored there; use them for intermediate computational results. /tmp and /scratch are NOT backed up and can be erased at any time, especially if a reinstall of one or more machines is required or if one of these directories is full and other users need space. /tmp is also regularly cleaned up by a system job, and any file in /tmp that hasn't been accessed in the last 10 days will be deleted.&lt;br /&gt;
&lt;br /&gt;
The head node's /scratch is approximately 3TB and its subdirectory /scratch/network is exported to all nodes (as /scratch/network). Therefore, if you need to access/write temporary data from all nodes, create a subdirectory of /scratch/network (like /scratch/network/username) and read/write there.&lt;br /&gt;
&lt;br /&gt;
Nodes also have local /scratch space, approximately 700GB in size.  This local disk is also quite fast, so consider it for fast data writing and reading.  Just like with /scratch/network, create a per-user subdirectory (such as /scratch/username) and read/write from there.  As mentioned above, /scratch/network on these nodes is mounted from the head node and, while bigger, it is also a lot slower than the local disk.&lt;br /&gt;
&lt;br /&gt;
It cannot be emphasized enough that /scratch (and /scratch/network) is for '''temporary''' data storage '''only'''.  Data placed there will occasionally be purged (without notice, oldest first) as needed to ensure all users have enough space.&lt;br /&gt;
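The per-user scratch layout described above can be sketched in a few shell lines. This is a hedged illustration only: on the cluster the base directory would be /scratch/network (or a node's local /scratch), while here a temporary directory stands in so the sketch runs anywhere.

```shell
# Sketch: create a per-user working area under a scratch space.
# On the cluster SCRATCH_BASE would be /scratch/network (shared) or
# /scratch (node-local); a temp dir is used here as a stand-in.
SCRATCH_BASE=${SCRATCH_BASE:-$(mktemp -d)}
USER=${USER:-$(whoami)}
workdir="$SCRATCH_BASE/$USER"
mkdir -p "$workdir"
echo "temporary working area: $workdir"
```

Remember that anything placed under such a directory is subject to the purge policy above.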
=== Access ===&lt;br /&gt;
At this time the cluster is open to all Math and PACM members.&lt;br /&gt;
&lt;br /&gt;
=== How to connect ===&lt;br /&gt;
In order to connect to the NewComp cluster you will first have to log in to &amp;lt;tt&amp;gt;math.princeton.edu&amp;lt;/tt&amp;gt;, and from there you can run:&lt;br /&gt;
 ssh newcomp&lt;br /&gt;
Login should proceed without the need to enter any passwords.  &lt;br /&gt;
=== Compiling your programs ===&lt;br /&gt;
You should compile and prepare your jobs on the head node.  You can set up your environment to use one of the available compilers or MPI versions by using the module command.  Check [[Documentation_and_Information:Modules|how to use environment modules]].&lt;br /&gt;
&lt;br /&gt;
For MPI you should probably be using the latest version of OpenMPI, as it can take advantage of the Infiniband interfaces on the nodes.&lt;br /&gt;
=== Scheduling/Running Jobs ===&lt;br /&gt;
No jobs/computations, except maybe very short test runs, should be run on the head node.  Any other jobs will be terminated without prior notice.&lt;br /&gt;
&lt;br /&gt;
All jobs have to be submitted to the scheduler, which will take care of assigning the necessary resources and running the job.  Any computations found running without being submitted through the scheduler, or that were submitted incorrectly (e.g. if the job consumes more cores than allocated or runs after it was supposed to complete), will be terminated without prior notice.&lt;br /&gt;
&lt;br /&gt;
The scheduler in use on newcomp is torque/maui.  &lt;br /&gt;
&lt;br /&gt;
==== Torque/Maui Queues ====&lt;br /&gt;
The scheduler will automatically place your job in one of the following queues.  Here are their names and their current limits:&lt;br /&gt;
===== Short Length Queue =====&lt;br /&gt;
* 4 hour wall clock limit&lt;br /&gt;
* 48 max processes total (of all users together)&lt;br /&gt;
* 3 nodes max per job&lt;br /&gt;
===== Medium Length Queue =====&lt;br /&gt;
* 4-24 hour wall clock limit&lt;br /&gt;
* 24 max processes total (of all users together)&lt;br /&gt;
* 2 nodes max per job&lt;br /&gt;
===== Long Length Queue =====&lt;br /&gt;
* 24 hour to 7 day wall clock limit&lt;br /&gt;
* 24 max processes total (of all users together)&lt;br /&gt;
* 12 max processes per user&lt;br /&gt;
==== Job Submission Gotchas ====&lt;br /&gt;
Please take a look at the examples below - you absolutely have to specify how many nodes you need and how many cores per node, as well as the wall clock time.  Make sure you specify enough time for your job to finish while staying close to the actual run time.  The scheduler will use that information to fit your job in, and requesting much more time than you actually need might make your jobs wait too long to be scheduled.&lt;br /&gt;
==== Submitting Single Core/Serial Jobs ====&lt;br /&gt;
To run a single core program with an executable called, say, myprogram, compiled with the Intel 10.1 compiler, you will need to write a job script for torque. Here is a sample command script, serial.cmd, which uses (of course) 1 core:&lt;br /&gt;
&lt;br /&gt;
 cd my_serial_directory&lt;br /&gt;
 cat serial.cmd&lt;br /&gt;
 &lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # serial job using 1 node and 1 processor, and runs&lt;br /&gt;
 # for 3 hours (max).&lt;br /&gt;
 #PBS -l nodes=1:ppn=1,walltime=3:00:00&lt;br /&gt;
 #&lt;br /&gt;
 # sends mail if the process aborts, when it begins, and&lt;br /&gt;
 # when it ends (abe)&lt;br /&gt;
 #PBS -m abe&lt;br /&gt;
 #&lt;br /&gt;
 # load intel compiler settings before running the program&lt;br /&gt;
 # since we compiled it with intel 10.1&lt;br /&gt;
 module load intel/10.1&lt;br /&gt;
 # go to the directory with the program&lt;br /&gt;
 cd $HOME/my_serial_directory&lt;br /&gt;
 # and run it&lt;br /&gt;
 ./myprogram&lt;br /&gt;
&lt;br /&gt;
To submit the job to the scheduling system, use:&lt;br /&gt;
&lt;br /&gt;
 qsub serial.cmd&lt;br /&gt;
==== Submitting Parallel Jobs ====&lt;br /&gt;
To run a parallel/MPI executable called myparallelprog, you will need to create a job script for torque. Here is a sample command script, parallel.cmd, which uses 16 cores total (8 cores per node).&lt;br /&gt;
&lt;br /&gt;
 cd my_mpi_directory&lt;br /&gt;
 cat parallel.cmd&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # parallel job using 2 nodes and 16 CPU cores, and runs&lt;br /&gt;
 # for 4 hours (max).&lt;br /&gt;
 #PBS -l nodes=2:ppn=8,walltime=4:00:00&lt;br /&gt;
 #&lt;br /&gt;
 # sends mail if the process aborts, when it begins, and&lt;br /&gt;
 # when it ends (abe)&lt;br /&gt;
 #PBS -m abe&lt;br /&gt;
 #&lt;br /&gt;
 module load openmpi&lt;br /&gt;
 cd /u/username/my_mpi_directory&lt;br /&gt;
 numprocs=`wc -l &amp;lt;${PBS_NODEFILE}`&lt;br /&gt;
 mpiexec -np $numprocs ./myparallelprog&lt;br /&gt;
&lt;br /&gt;
To submit the job to the batch system, use:&lt;br /&gt;
&lt;br /&gt;
 qsub parallel.cmd&lt;br /&gt;
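The `numprocs=` line in parallel.cmd above counts the lines of the node file torque generates - one line per allocated core slot. Here is a small stand-alone sketch of that idiom; the node file contents are faked, since on the cluster the scheduler itself sets $PBS_NODEFILE.

```shell
# Fake a PBS node file: torque writes one line per allocated core slot.
PBS_NODEFILE=$(mktemp)
printf 'node001\nnode001\nnode002\nnode002\n' > "$PBS_NODEFILE"
# Count the lines to get the number of MPI ranks to start
# (same count as the input-redirection form used in parallel.cmd):
numprocs=$(wc -l "$PBS_NODEFILE" | awk '{print $1}')
echo "numprocs=$numprocs"
rm -f "$PBS_NODEFILE"
```

With nodes=2:ppn=8 the real node file would have 16 lines, so mpiexec would start 16 ranks.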
==== Submitting Multiple Parametrized Jobs ====&lt;br /&gt;
If you need to submit multiple jobs, say 100, you can submit them all at once with&lt;br /&gt;
 [username@newcomp] qsub -t 1-100 jobscript.cmd&lt;br /&gt;
That will submit 100 jobs, each assigned a unique number (from 1 to 100) available in the environment variable PBS_ARRAYID.  You can use that environment variable in the jobscript.cmd script, e.g. to process different data sets.  For example, the script could be:&lt;br /&gt;
 # serial job using 1 node and 1 processor, and runs&lt;br /&gt;
 # for 3 hours (max).&lt;br /&gt;
 #PBS -l nodes=1:ppn=1,walltime=3:00:00&lt;br /&gt;
 #&lt;br /&gt;
 # sends mail if the process aborts, when it begins, and&lt;br /&gt;
 # when it ends (abe)&lt;br /&gt;
 #PBS -m abe&lt;br /&gt;
 #&lt;br /&gt;
 cd $HOME/my_serial_directory&lt;br /&gt;
 # and run it&lt;br /&gt;
 ./myprogram $PBS_ARRAYID&lt;br /&gt;
&lt;br /&gt;
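The per-job numbering above can be sketched as follows. PBS_ARRAYID would be set by torque for each member of the array; here the value is faked, and the data file naming scheme is a made-up example.

```shell
# Simulate one member of a `qsub -t 1-100` array: torque would set
# PBS_ARRAYID itself; we supply a value here only for illustration.
PBS_ARRAYID=${PBS_ARRAYID:-7}
# Use the id to pick a per-job input file (hypothetical naming scheme):
input="data_set_${PBS_ARRAYID}.dat"
echo "job $PBS_ARRAYID would process $input"
```

Each of the 100 array members would thus read a different data_set_N.dat without any change to the script.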
==== Useful Scheduler Tools ====&lt;br /&gt;
* showbf - shows how many nodes are available and for how long. The wall clock limit of a job should be less than the duration reported by showbf, otherwise the job will not run before the next scheduled maintenance period.&lt;br /&gt;
* diagnose -p - shows the priority assigned to queued jobs&lt;br /&gt;
* showq or qstat - shows jobs in the queues&lt;br /&gt;
* xpbs - a graphical display of the queues&lt;br /&gt;
* pbstop - a text based view of the cluster nodes (e.g., pbstop -c 1 -m 8 -01234567)&lt;br /&gt;
* qdel - to kill a job&lt;br /&gt;
* qsig -s 0 &amp;lt;jobid&amp;gt; - alternate way to kill a job that will not be removed with qdel&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Computational_clusters_in_Fine_Hall&amp;diff=1881</id>
		<title>Documentation and Information:Computational clusters in Fine Hall</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Computational_clusters_in_Fine_Hall&amp;diff=1881"/>
		<updated>2010-08-03T14:52:15Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: /* Submitting Multiple Parametrized Jobs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Fine Hall machine room currently hosts one mini computational cluster.&lt;br /&gt;
&lt;br /&gt;
== NewComp computing cluster ==&lt;br /&gt;
=== Description ===&lt;br /&gt;
The NewComp mini computational cluster consists of 4 nodes, each with 2 Xeon X5680 CPUs (6 cores each, 12 cores per node), running at 3.33GHz.  Each node has 96GB of memory (8 GB/core).  The head node is equipped with one Intel Xeon X5650 CPU (6 cores, running at 2.67GHz) and 12 GB of memory.&lt;br /&gt;
&lt;br /&gt;
Nodes are connected with gigabit ethernet networking as well as 4x Infiniband.  &lt;br /&gt;
=== Configuration ===&lt;br /&gt;
The cluster is integrated into Fine Hall Math/PACM network and all the cluster machines mount Math/PACM home directories.  The operating system used on these machines is a clone of RHEL 6.&lt;br /&gt;
&lt;br /&gt;
For temporary storage, besides /tmp, one can also use /scratch - with no quotas. It must be emphasized that neither /scratch nor /tmp can be used for permanent data storage and no crucial data should be stored there; use them for intermediate computational results. /tmp and /scratch are NOT backed up and can be erased at any time, especially if a reinstall of one or more machines is required or if one of these directories is full and other users need space. /tmp is also regularly cleaned up by a system job, and any file in /tmp that hasn't been accessed in the last 10 days will be deleted.&lt;br /&gt;
&lt;br /&gt;
The head node's /scratch is approximately 3TB and its subdirectory /scratch/network is exported to all nodes (as /scratch/network). Therefore, if you need to access/write temporary data from all nodes, create a subdirectory of /scratch/network (like /scratch/network/username) and read/write there.&lt;br /&gt;
&lt;br /&gt;
Nodes also have local /scratch space, approximately 700GB in size.  This local disk is also quite fast, so consider it for fast data writing and reading.  Just like with /scratch/network, create a per-user subdirectory (such as /scratch/username) and read/write from there.  As mentioned above, /scratch/network on these nodes is mounted from the head node and, while bigger, it is also a lot slower than the local disk.&lt;br /&gt;
&lt;br /&gt;
It cannot be emphasized enough that /scratch (and /scratch/network) is for '''temporary''' data storage '''only'''.  Data placed there will occasionally be purged (without notice, oldest first) as needed to ensure all users have enough space.&lt;br /&gt;
=== Access ===&lt;br /&gt;
At this time the cluster is open to all Math and PACM members.&lt;br /&gt;
&lt;br /&gt;
=== How to connect ===&lt;br /&gt;
In order to connect to the NewComp cluster you will first have to log in to &amp;lt;tt&amp;gt;math.princeton.edu&amp;lt;/tt&amp;gt;, and from there you can run:&lt;br /&gt;
 ssh newcomp&lt;br /&gt;
Login should proceed without the need to enter any passwords.  &lt;br /&gt;
=== Compiling your programs ===&lt;br /&gt;
You should compile and prepare your jobs on the head node.  You can set up your environment to use one of the available compilers or MPI versions by using the module command.  Check [[Documentation_and_Information:Modules|how to use environment modules]].&lt;br /&gt;
&lt;br /&gt;
For MPI you should probably be using the latest version of OpenMPI, as it can take advantage of the Infiniband interfaces on the nodes.&lt;br /&gt;
=== Scheduling/Running Jobs ===&lt;br /&gt;
No jobs/computations, except maybe very short test runs, should be run on the head node.  Any other jobs will be terminated without prior notice.&lt;br /&gt;
&lt;br /&gt;
All jobs have to be submitted to the scheduler, which will take care of assigning the necessary resources and running the job.  Any computations found running without being submitted through the scheduler, or that were submitted incorrectly (e.g. if the job consumes more cores than allocated or runs after it was supposed to complete), will be terminated without prior notice.&lt;br /&gt;
&lt;br /&gt;
The scheduler in use on newcomp is torque/maui.  &lt;br /&gt;
&lt;br /&gt;
==== Torque/Maui Queues ====&lt;br /&gt;
The scheduler will automatically place your job in one of the following queues.  Here are their names and their current limits:&lt;br /&gt;
===== Short Length Queue =====&lt;br /&gt;
* 4 hour wall clock limit&lt;br /&gt;
* 48 max processes total (of all users together)&lt;br /&gt;
* 3 nodes max per job&lt;br /&gt;
===== Medium Length Queue =====&lt;br /&gt;
* 4-24 hour wall clock limit&lt;br /&gt;
* 24 max processes total (of all users together)&lt;br /&gt;
* 2 nodes max per job&lt;br /&gt;
===== Long Length Queue =====&lt;br /&gt;
* 24 hour to 7 day wall clock limit&lt;br /&gt;
* 24 max processes total (of all users together)&lt;br /&gt;
* 12 max processes per user&lt;br /&gt;
==== Job Submission Gotchas ====&lt;br /&gt;
Please take a look at the examples below - you absolutely have to specify how many nodes you need and how many cores per node, as well as the wall clock time.  Make sure you specify enough time for your job to finish while staying close to the actual run time.  The scheduler will use that information to fit your job in, and requesting much more time than you actually need might make your jobs wait too long to be scheduled.&lt;br /&gt;
==== Submitting Single Core/Serial Jobs ====&lt;br /&gt;
To run a single core program with an executable called, say, myprogram, compiled with the Intel 10.1 compiler, you will need to write a job script for torque. Here is a sample command script, serial.cmd, which uses (of course) 1 core:&lt;br /&gt;
&lt;br /&gt;
 cd my_serial_directory&lt;br /&gt;
 cat serial.cmd&lt;br /&gt;
 &lt;br /&gt;
 # serial job using 1 node and 1 processor, and runs&lt;br /&gt;
 # for 3 hours (max).&lt;br /&gt;
 #PBS -l nodes=1:ppn=1,walltime=3:00:00&lt;br /&gt;
 #&lt;br /&gt;
 # sends mail if the process aborts, when it begins, and&lt;br /&gt;
 # when it ends (abe)&lt;br /&gt;
 #PBS -m abe&lt;br /&gt;
 #&lt;br /&gt;
 # load intel compiler settings before running the program&lt;br /&gt;
 # since we compiled it with intel 10.1&lt;br /&gt;
 module load intel/10.1&lt;br /&gt;
 # go to the directory with the program&lt;br /&gt;
 cd $HOME/my_serial_directory&lt;br /&gt;
 # and run it&lt;br /&gt;
 ./myprogram&lt;br /&gt;
&lt;br /&gt;
To submit the job to the scheduling system, use:&lt;br /&gt;
&lt;br /&gt;
 qsub serial.cmd&lt;br /&gt;
==== Submitting Parallel Jobs ====&lt;br /&gt;
To run a parallel/MPI executable called myparallelprog, you will need to create a job script for torque. Here is a sample command script, parallel.cmd, which uses 16 cores total (8 cores per node).&lt;br /&gt;
&lt;br /&gt;
 cd my_mpi_directory&lt;br /&gt;
 cat parallel.cmd&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # parallel job using 2 nodes and 16 CPU cores, and runs&lt;br /&gt;
 # for 4 hours (max).&lt;br /&gt;
 #PBS -l nodes=2:ppn=8,walltime=4:00:00&lt;br /&gt;
 #&lt;br /&gt;
 # sends mail if the process aborts, when it begins, and&lt;br /&gt;
 # when it ends (abe)&lt;br /&gt;
 #PBS -m abe&lt;br /&gt;
 #&lt;br /&gt;
 module load openmpi&lt;br /&gt;
 cd /u/username/my_mpi_directory&lt;br /&gt;
 numprocs=`wc -l &amp;lt;${PBS_NODEFILE}`&lt;br /&gt;
 mpiexec -np $numprocs ./myparallelprog&lt;br /&gt;
&lt;br /&gt;
To submit the job to the batch system, use:&lt;br /&gt;
&lt;br /&gt;
 qsub parallel.cmd&lt;br /&gt;
==== Submitting Multiple Parametrized Jobs ====&lt;br /&gt;
If you need to submit multiple jobs, say 100, you can submit them all at once with&lt;br /&gt;
 [username@newcomp] qsub -t 1-100 jobscript.cmd&lt;br /&gt;
That will submit 100 jobs, each assigned a unique number (from 1 to 100) available in the environment variable PBS_ARRAYID.  You can use that environment variable in the jobscript.cmd script, e.g. to process different data sets.  For example, the script could be:&lt;br /&gt;
 # serial job using 1 node and 1 processor, and runs&lt;br /&gt;
 # for 3 hours (max).&lt;br /&gt;
 #PBS -l nodes=1:ppn=1,walltime=3:00:00&lt;br /&gt;
 #&lt;br /&gt;
 # sends mail if the process aborts, when it begins, and&lt;br /&gt;
 # when it ends (abe)&lt;br /&gt;
 #PBS -m abe&lt;br /&gt;
 #&lt;br /&gt;
 cd $HOME/my_serial_directory&lt;br /&gt;
 # and run it&lt;br /&gt;
 ./myprogram $PBS_ARRAYID&lt;br /&gt;
&lt;br /&gt;
==== Useful Scheduler Tools ====&lt;br /&gt;
* showbf - shows how many nodes are available and for how long. The wall clock limit of a job should be less than the duration reported by showbf, otherwise the job will not run before the next scheduled maintenance period.&lt;br /&gt;
* diagnose -p - shows the priority assigned to queued jobs&lt;br /&gt;
* showq or qstat - shows jobs in the queues&lt;br /&gt;
* xpbs - a graphical display of the queues&lt;br /&gt;
* pbstop - a text based view of the cluster nodes (e.g., pbstop -c 1 -m 8 -01234567)&lt;br /&gt;
* qdel - to kill a job&lt;br /&gt;
* qsig -s 0 &amp;lt;jobid&amp;gt; - alternate way to kill a job that will not be removed with qdel&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Computational_clusters_in_Fine_Hall&amp;diff=1880</id>
		<title>Documentation and Information:Computational clusters in Fine Hall</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Computational_clusters_in_Fine_Hall&amp;diff=1880"/>
		<updated>2010-08-03T14:51:41Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Fine Hall machine room currently hosts one mini computational cluster.&lt;br /&gt;
&lt;br /&gt;
== NewComp computing cluster ==&lt;br /&gt;
=== Description ===&lt;br /&gt;
The NewComp mini computational cluster consists of 4 nodes, each with 2 Xeon X5680 CPUs (6 cores each, 12 cores per node), running at 3.33GHz.  Each node has 96GB of memory (8 GB/core).  The head node is equipped with one Intel Xeon X5650 CPU (6 cores, running at 2.67GHz) and 12 GB of memory.&lt;br /&gt;
&lt;br /&gt;
Nodes are connected with gigabit ethernet networking as well as 4x Infiniband.  &lt;br /&gt;
=== Configuration ===&lt;br /&gt;
The cluster is integrated into Fine Hall Math/PACM network and all the cluster machines mount Math/PACM home directories.  The operating system used on these machines is a clone of RHEL 6.&lt;br /&gt;
&lt;br /&gt;
For temporary storage, besides /tmp, one can also use /scratch - with no quotas. It must be emphasized that neither /scratch nor /tmp can be used for permanent data storage and no crucial data should be stored there; use them for intermediate computational results. /tmp and /scratch are NOT backed up and can be erased at any time, especially if a reinstall of one or more machines is required or if one of these directories is full and other users need space. /tmp is also regularly cleaned up by a system job, and any file in /tmp that hasn't been accessed in the last 10 days will be deleted.&lt;br /&gt;
&lt;br /&gt;
The head node's /scratch is approximately 3TB, and its subdirectory /scratch/network is exported to all nodes (as /scratch/network). Therefore, if you need to read or write temporary data from all nodes, create a subdirectory of /scratch/network (such as /scratch/network/username) and read/write there.&lt;br /&gt;
&lt;br /&gt;
Each node also has local /scratch space of approximately 700GB.  This local disk is quite fast, so consider it for fast data reading and writing.  Just like with /scratch/network, create /scratch/username and read/write from there.  As mentioned above, /scratch/network on the nodes is mounted from the head node, and while bigger in size it is also a lot slower than the local disk.&lt;br /&gt;
&lt;br /&gt;
It cannot be emphasized enough that /scratch (and /scratch/network) is for '''temporary''' data storage '''only'''.  Data placed there will occasionally be purged (without notice, oldest first) as needed to ensure all users have enough space.&lt;br /&gt;
=== Access ===&lt;br /&gt;
At this time the cluster is open to all Math and PACM members.&lt;br /&gt;
&lt;br /&gt;
=== How to connect ===&lt;br /&gt;
To connect to the NewComp cluster, first log in to &amp;lt;tt&amp;gt;math.princeton.edu&amp;lt;/tt&amp;gt;; from there you can:&lt;br /&gt;
 ssh newcomp&lt;br /&gt;
Login should proceed without the need to enter any passwords.  &lt;br /&gt;
=== Compiling your programs ===&lt;br /&gt;
You should compile and prepare your jobs on the head node.  You can set up your environment to use one of the available compilers or MPI versions with the module command.  Check [[Documentation_and_Information:Modules|how to use environment modules]].&lt;br /&gt;
&lt;br /&gt;
For MPI you should probably use the latest version of OpenMPI, as it can take advantage of the InfiniBand interfaces on the nodes.&lt;br /&gt;
=== Scheduling/Running Jobs ===&lt;br /&gt;
No jobs/computations, except perhaps very short test runs, should be run on the head node.  Any other jobs will be terminated without prior notice.&lt;br /&gt;
&lt;br /&gt;
All jobs have to be submitted to the scheduler, which will take care of assigning the necessary resources and running the job.  Any computations found running without being submitted through the scheduler, or that were submitted incorrectly (e.g. if the job consumes more cores than allocated or runs after it was supposed to complete), will be terminated without prior notice.&lt;br /&gt;
&lt;br /&gt;
The scheduler in use on newcomp is Torque/Maui.  &lt;br /&gt;
&lt;br /&gt;
==== Torque/Maui Queues ====&lt;br /&gt;
The scheduler will automatically place your job in one of the following queues.  Here are their names and their current limits:&lt;br /&gt;
===== Short Length Queue =====&lt;br /&gt;
* 4 hour wall clock limit&lt;br /&gt;
* 48 max processes total (of all users together)&lt;br /&gt;
* 3 nodes max per job&lt;br /&gt;
===== Medium Length Queue =====&lt;br /&gt;
* 4-24 hour wall clock limit&lt;br /&gt;
* 24 max processes total (of all users together)&lt;br /&gt;
* 2 nodes max per job&lt;br /&gt;
===== Long Length Queue =====&lt;br /&gt;
* 24 hour-7 days wall clock limit&lt;br /&gt;
* 24 max processes total (of all users together)&lt;br /&gt;
* 12 max processes per user&lt;br /&gt;
==== Job Submission Gotchas ====&lt;br /&gt;
Please take a look at the examples below - you absolutely have to specify how many nodes you need, how many cores per node, and the wall clock time.  Make sure you specify enough time for your job to finish, while staying close to the actual run time.  The scheduler uses that information to fit your job in, and requesting much more time than you actually need might make your jobs wait too long to be scheduled.&lt;br /&gt;
==== Submitting Single Core/Serial Jobs ====&lt;br /&gt;
To run a single core program with an executable called, say, myprogram, compiled with the Intel 10.1 compiler, you will need to write a job script for Torque. Here is a sample command script, serial.cmd, which uses (of course) 1 core:&lt;br /&gt;
&lt;br /&gt;
 cd my_serial_directory&lt;br /&gt;
 cat serial.cmd&lt;br /&gt;
 &lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # serial job using 1 node and 1 processor, and runs&lt;br /&gt;
 # for 3 hours (max).&lt;br /&gt;
 #PBS -l nodes=1:ppn=1,walltime=3:00:00&lt;br /&gt;
 #&lt;br /&gt;
 # sends mail if the process aborts, when it begins, and&lt;br /&gt;
 # when it ends (abe)&lt;br /&gt;
 #PBS -m abe&lt;br /&gt;
 #&lt;br /&gt;
 # load intel compiler settings before running the program&lt;br /&gt;
 # since we compiled it with intel 10.1&lt;br /&gt;
 module load intel/10.1&lt;br /&gt;
 # go to the directory with the program&lt;br /&gt;
 cd $HOME/my_serial_directory&lt;br /&gt;
 # and run it&lt;br /&gt;
 ./myprogram&lt;br /&gt;
&lt;br /&gt;
To submit the job to the scheduling system, use:&lt;br /&gt;
&lt;br /&gt;
 qsub serial.cmd&lt;br /&gt;
==== Submitting Parallel Jobs ====&lt;br /&gt;
To run your parallel/MPI executable called myparallelprog, a job script will need to be created for Torque. Here is a sample command script, parallel.cmd, which uses 16 cores total (8 cores per node).&lt;br /&gt;
&lt;br /&gt;
 cd my_mpi_directory&lt;br /&gt;
 cat parallel.cmd&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # parallel job using 2 nodes and 16 CPU cores, and runs&lt;br /&gt;
 # for 4 hours (max).&lt;br /&gt;
 #PBS -l nodes=2:ppn=8,walltime=4:00:00&lt;br /&gt;
 #&lt;br /&gt;
 # sends mail if the process aborts, when it begins, and&lt;br /&gt;
 # when it ends (abe)&lt;br /&gt;
 #PBS -m abe&lt;br /&gt;
 #&lt;br /&gt;
 module load openmpi&lt;br /&gt;
 cd /u/username/my_mpi_directory&lt;br /&gt;
 numprocs=`wc -l &amp;lt;${PBS_NODEFILE}`&lt;br /&gt;
 mpiexec -np $numprocs ./a.out&lt;br /&gt;
&lt;br /&gt;
To submit the job to the batch system, use:&lt;br /&gt;
&lt;br /&gt;
 qsub parallel.cmd&lt;br /&gt;
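The numprocs line in parallel.cmd works because Torque writes one line per allocated core into the file named by PBS_NODEFILE, so counting its lines yields the number of MPI ranks. A minimal sketch of that logic, using a simulated node file (the node names here are invented, not real cluster hosts):&lt;br /&gt;

```shell
# Sketch only: simulate the node file that Torque would provide via
# PBS_NODEFILE for a 2-node, 2-cores-per-node allocation (names invented).
PBS_NODEFILE=$(mktemp)
printf 'node1\nnode1\nnode2\nnode2\n' > "$PBS_NODEFILE"
# One line per core, so the line count is the MPI process count.
numprocs=$(wc -l "$PBS_NODEFILE" | awk '{print $1}')
echo "would run: mpiexec -np $numprocs ./a.out"
rm -f "$PBS_NODEFILE"
```

In a real job there is no need to create the file; the scheduler sets PBS_NODEFILE before the script runs.&lt;br /&gt;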
==== Submitting Multiple Parametrized Jobs ====&lt;br /&gt;
If you need to submit multiple jobs, say 100, you can submit them with&lt;br /&gt;
 [username@newcomp] qsub -t 1-100 jobscript.cmd&lt;br /&gt;
That will submit 100 jobs, and each will be assigned a unique number (from 1 to 100), available in the environment variable PBS_ARRAYID.  You can use that environment variable in the jobscript.cmd script, e.g. to process different data sets.  For example, the script could be:&lt;br /&gt;
 # serial job using 1 node and 1 processor, and runs&lt;br /&gt;
 # for 3 hours (max).&lt;br /&gt;
 #PBS -l nodes=1:ppn=1,walltime=3:00:00&lt;br /&gt;
 #&lt;br /&gt;
 # sends mail if the process aborts, when it begins, and&lt;br /&gt;
 # when it ends (abe)&lt;br /&gt;
 #PBS -m abe&lt;br /&gt;
 #&lt;br /&gt;
 cd $HOME/my_serial_directory&lt;br /&gt;
 # and run it&lt;br /&gt;
 ./myprogram $PBS_ARRAYID&lt;br /&gt;
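A common way to use the task number is to derive per-task file names from it. A minimal sketch, with PBS_ARRAYID set by hand and invented file names (in a real array job the scheduler sets the variable):&lt;br /&gt;

```shell
# Sketch only: PBS_ARRAYID is set by hand here; under qsub -t the
# scheduler sets it to the task's number (1..100 in the example above).
PBS_ARRAYID=7
# Derive per-task input and log file names (names are invented).
datafile="input_${PBS_ARRAYID}.dat"
logfile="run_${PBS_ARRAYID}.log"
echo "task ${PBS_ARRAYID}: read ${datafile}, log to ${logfile}"
```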
==== Useful Scheduler Tools ====&lt;br /&gt;
* showbf - shows how many nodes are available and for how long. The wall clock limit of a job should be less than the duration reported by showbf, otherwise the job will not run before the next scheduled maintenance period.&lt;br /&gt;
* diagnose -p - shows the priority assigned to queued jobs&lt;br /&gt;
* showq or qstat - shows jobs in the queues&lt;br /&gt;
* xpbs - a graphical display of the queues&lt;br /&gt;
* pbstop - a text based view of the cluster nodes (e.g., pbstop -c 1 -m 8 -01234567)&lt;br /&gt;
* qdel - to kill a job&lt;br /&gt;
* qsig -s 0 &amp;lt;jobid&amp;gt; - an alternative way to kill a job that qdel will not remove&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Computational_clusters_in_Fine_Hall&amp;diff=1879</id>
		<title>Documentation and Information:Computational clusters in Fine Hall</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Computational_clusters_in_Fine_Hall&amp;diff=1879"/>
		<updated>2010-07-26T15:52:48Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Fine Hall machine room currently hosts one mini computational cluster.&lt;br /&gt;
&lt;br /&gt;
== NewComp computing cluster ==&lt;br /&gt;
=== Description ===&lt;br /&gt;
The NewComp mini computational cluster consists of 4 nodes, each with 2 Xeon X5680 CPUs - 6 cores each, 12 cores total, running at 3.33GHz.  Each node has 96GB of memory - 8 GB/core.  The head node is equipped with one Intel Xeon X5650 CPU (6 cores, running at 2.67GHz) and 12 GB of memory.&lt;br /&gt;
&lt;br /&gt;
Nodes are connected with gigabit ethernet networking as well as 4x Infiniband.  &lt;br /&gt;
=== Configuration ===&lt;br /&gt;
The cluster is integrated into the Fine Hall Math/PACM network, and all the cluster machines mount Math/PACM home directories.  The operating system used on these machines is a clone of RHEL 6.&lt;br /&gt;
&lt;br /&gt;
For temporary storage, besides /tmp, one can also use /scratch - with no quotas. It must be emphasized that neither /scratch nor /tmp can be used for permanent data storage and no crucial data should be stored there; use them for intermediate computational results instead. /tmp and /scratch are NOT backed up and can be erased at any time, especially if a reinstall of one or more machines is required or if one of these directories is full and other users need space. /tmp is also regularly cleaned up by a system job, and any file in /tmp that hasn't been accessed in the last 10 days will be deleted.&lt;br /&gt;
&lt;br /&gt;
The head node's /scratch is approximately 3TB, and its subdirectory /scratch/network is exported to all nodes (as /scratch/network). Therefore, if you need to read or write temporary data from all nodes, create a subdirectory of /scratch/network (such as /scratch/network/username) and read/write there.&lt;br /&gt;
&lt;br /&gt;
Each node also has local /scratch space of approximately 700GB.  This local disk is quite fast, so consider it for fast data reading and writing.  Just like with /scratch/network, create /scratch/username and read/write from there.  As mentioned above, /scratch/network on the nodes is mounted from the head node, and while bigger in size it is also a lot slower than the local disk.&lt;br /&gt;
&lt;br /&gt;
It cannot be emphasized enough that /scratch (and /scratch/network) is for '''temporary''' data storage '''only'''.  Data placed there will occasionally be purged (without notice, oldest first) as needed to ensure all users have enough space.&lt;br /&gt;
=== Access ===&lt;br /&gt;
At this time the cluster is open to all Math and PACM members.&lt;br /&gt;
&lt;br /&gt;
=== How to connect ===&lt;br /&gt;
To connect to the NewComp cluster, first log in to &amp;lt;tt&amp;gt;math.princeton.edu&amp;lt;/tt&amp;gt;; from there you can:&lt;br /&gt;
 ssh newcomp&lt;br /&gt;
Login should proceed without the need to enter any passwords.  &lt;br /&gt;
=== Compiling your programs ===&lt;br /&gt;
You should compile and prepare your jobs on the head node.  You can set up your environment to use one of the available compilers or MPI versions with the module command.  Check [[Documentation_and_Information:Modules|how to use environment modules]].&lt;br /&gt;
&lt;br /&gt;
For MPI you should probably use the latest version of OpenMPI, as it can take advantage of the InfiniBand interfaces on the nodes.&lt;br /&gt;
=== Scheduling/Running Jobs ===&lt;br /&gt;
No jobs/computations, except perhaps very short test runs, should be run on the head node.  Any other jobs will be terminated without prior notice.&lt;br /&gt;
&lt;br /&gt;
All jobs have to be submitted to the scheduler, which will take care of assigning the necessary resources and running the job.  Any computations found running without being submitted through the scheduler, or that were submitted incorrectly (e.g. if the job consumes more cores than allocated or runs after it was supposed to complete), will be terminated without prior notice.&lt;br /&gt;
&lt;br /&gt;
The scheduler in use on newcomp is Torque/Maui.  &lt;br /&gt;
&lt;br /&gt;
==== Torque/Maui Queues ====&lt;br /&gt;
The scheduler will automatically place your job in one of the following queues.  Here are their names and their current limits:&lt;br /&gt;
===== Short Length Queue =====&lt;br /&gt;
* 4 hour wall clock limit&lt;br /&gt;
* 48 max processes total (of all users together)&lt;br /&gt;
* 3 nodes max per job&lt;br /&gt;
===== Medium Length Queue =====&lt;br /&gt;
* 4-24 hour wall clock limit&lt;br /&gt;
* 24 max processes total (of all users together)&lt;br /&gt;
* 2 nodes max per job&lt;br /&gt;
===== Long Length Queue =====&lt;br /&gt;
* 24 hour-7 days wall clock limit&lt;br /&gt;
* 24 max processes total (of all users together)&lt;br /&gt;
* 12 max processes per user&lt;br /&gt;
==== Job Submission Gotchas ====&lt;br /&gt;
Please take a look at the examples below - you absolutely have to specify how many nodes you need, how many cores per node, and the wall clock time.  Make sure you specify enough time for your job to finish, while staying close to the actual run time.  The scheduler uses that information to fit your job in, and requesting much more time than you actually need might make your jobs wait too long to be scheduled.&lt;br /&gt;
==== Submitting Single Core/Serial Jobs ====&lt;br /&gt;
To run a single core program with an executable called, say, myprogram, compiled with the Intel 10.1 compiler, you will need to write a job script for Torque. Here is a sample command script, serial.cmd, which uses (of course) 1 core:&lt;br /&gt;
&lt;br /&gt;
 cd my_serial_directory&lt;br /&gt;
 cat serial.cmd&lt;br /&gt;
 &lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # serial job using 1 node and 1 processor, and runs&lt;br /&gt;
 # for 3 hours (max).&lt;br /&gt;
 #PBS -l nodes=1:ppn=1,walltime=3:00:00&lt;br /&gt;
 #&lt;br /&gt;
 # sends mail if the process aborts, when it begins, and&lt;br /&gt;
 # when it ends (abe)&lt;br /&gt;
 #PBS -m abe&lt;br /&gt;
 #&lt;br /&gt;
 # load intel compiler settings before running the program&lt;br /&gt;
 # since we compiled it with intel 10.1&lt;br /&gt;
 module load intel/10.1&lt;br /&gt;
 # go to the directory with the program&lt;br /&gt;
 cd $HOME/my_serial_directory&lt;br /&gt;
 # and run it&lt;br /&gt;
 ./myprogram&lt;br /&gt;
&lt;br /&gt;
To submit the job to the scheduling system, use:&lt;br /&gt;
&lt;br /&gt;
 qsub serial.cmd&lt;br /&gt;
==== Submitting Parallel Jobs ====&lt;br /&gt;
To run your parallel/MPI executable called myparallelprog, a job script will need to be created for Torque. Here is a sample command script, parallel.cmd, which uses 16 cores total (8 cores per node).&lt;br /&gt;
&lt;br /&gt;
 cd my_mpi_directory&lt;br /&gt;
 cat parallel.cmd&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # parallel job using 2 nodes and 16 CPU cores, and runs&lt;br /&gt;
 # for 4 hours (max).&lt;br /&gt;
 #PBS -l nodes=2:ppn=8,walltime=4:00:00&lt;br /&gt;
 #&lt;br /&gt;
 # sends mail if the process aborts, when it begins, and&lt;br /&gt;
 # when it ends (abe)&lt;br /&gt;
 #PBS -m abe&lt;br /&gt;
 #&lt;br /&gt;
 module load openmpi&lt;br /&gt;
 cd /u/username/my_mpi_directory&lt;br /&gt;
 numprocs=`wc -l &amp;lt;${PBS_NODEFILE}`&lt;br /&gt;
 mpiexec -np $numprocs ./a.out&lt;br /&gt;
&lt;br /&gt;
To submit the job to the batch system, use:&lt;br /&gt;
&lt;br /&gt;
 qsub parallel.cmd&lt;br /&gt;
==== Useful Scheduler Tools ====&lt;br /&gt;
* showbf - shows how many nodes are available and for how long. The wall clock limit of a job should be less than the duration reported by showbf, otherwise the job will not run before the next scheduled maintenance period.&lt;br /&gt;
* diagnose -p - shows the priority assigned to queued jobs&lt;br /&gt;
* showq or qstat - shows jobs in the queues&lt;br /&gt;
* xpbs - a graphical display of the queues&lt;br /&gt;
* pbstop - a text based view of the cluster nodes (e.g., pbstop -c 1 -m 8 -01234567)&lt;br /&gt;
* qdel - to kill a job&lt;br /&gt;
* qsig -s 0 &amp;lt;jobid&amp;gt; - an alternative way to kill a job that qdel will not remove&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information&amp;diff=1878</id>
		<title>Documentation and Information</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information&amp;diff=1878"/>
		<updated>2010-07-26T15:47:26Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: /* Computational Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you will find documentation and information relevant to computing systems at Fine Hall.  If you are looking for specific instructions on how to perform certain tasks then you are more likely to find what you are looking for on [[HowTos]] and [[Frequently_Asked_Questions|Frequently Asked Questions]] pages.&lt;br /&gt;
&lt;br /&gt;
== Introductions ==&lt;br /&gt;
* [[Documentation_and_Information:Getting started with Linux|Getting started with Linux command line]]&lt;br /&gt;
&lt;br /&gt;
== Computational Resources ==&lt;br /&gt;
* [[Documentation_and_Information:Computational clusters in Fine Hall|Computational clusters in Fine Hall]]&lt;br /&gt;
* [[Documentation_and_Information:Computationally related software|Computationally related software]]&lt;br /&gt;
* [[Documentation_and_Information:Modules|How to use environment modules]]&lt;br /&gt;
&lt;br /&gt;
== Printers ==&lt;br /&gt;
* [[Documentation_and_Information:Public printers|Publicly accessible printers]]&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Computational_clusters_in_Fine_Hall&amp;diff=1877</id>
		<title>Documentation and Information:Computational clusters in Fine Hall</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Computational_clusters_in_Fine_Hall&amp;diff=1877"/>
		<updated>2010-07-26T15:43:42Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Fine Hall machine room currently hosts one mini computational cluster.&lt;br /&gt;
&lt;br /&gt;
== NewComp computing cluster ==&lt;br /&gt;
=== Description ===&lt;br /&gt;
The NewComp mini computational cluster consists of 4 nodes, each with 2 Xeon X5680 CPUs - 6 cores each, 12 cores total, running at 3.33GHz.  Each node has 96GB of memory - 8 GB/core.  The head node is equipped with one Intel Xeon X5650 CPU (6 cores, running at 2.67GHz) and 12 GB of memory.&lt;br /&gt;
&lt;br /&gt;
Nodes are connected with gigabit ethernet networking as well as 4x Infiniband.  &lt;br /&gt;
=== Configuration ===&lt;br /&gt;
The cluster is integrated into the Fine Hall Math/PACM network, and all the cluster machines mount Math/PACM home directories.  The operating system used on these machines is a clone of RHEL 6.&lt;br /&gt;
&lt;br /&gt;
For temporary storage, besides /tmp, one can also use /scratch - with no quotas. It must be emphasized that neither /scratch nor /tmp can be used for permanent data storage and no crucial data should be stored there; use them for intermediate computational results instead. /tmp and /scratch are NOT backed up and can be erased at any time, especially if a reinstall of one or more machines is required or if one of these directories is full and other users need space. /tmp is also regularly cleaned up by a system job, and any file in /tmp that hasn't been accessed in the last 10 days will be deleted.&lt;br /&gt;
&lt;br /&gt;
The head node's /scratch is approximately 3TB, and its subdirectory /scratch/network is exported to all nodes (as /scratch/network). Therefore, if you need to read or write temporary data from all nodes, create a subdirectory of /scratch/network (such as /scratch/network/username) and read/write there.&lt;br /&gt;
&lt;br /&gt;
Each node also has local /scratch space of approximately 700GB.  This local disk is quite fast, so consider it for fast data reading and writing.  Just like with /scratch/network, create /scratch/username and read/write from there.  As mentioned above, /scratch/network on the nodes is mounted from the head node, and while bigger in size it is also a lot slower than the local disk.&lt;br /&gt;
&lt;br /&gt;
It cannot be emphasized enough that /scratch (and /scratch/network) is for '''temporary''' data storage '''only'''.  Data placed there will occasionally be purged (without notice, oldest first) as needed to ensure all users have enough space.&lt;br /&gt;
=== Access ===&lt;br /&gt;
At this time the cluster is open to all Math and PACM members.&lt;br /&gt;
&lt;br /&gt;
=== How to connect ===&lt;br /&gt;
To connect to the NewComp cluster, first log in to &amp;lt;tt&amp;gt;math.princeton.edu&amp;lt;/tt&amp;gt;; from there you can:&lt;br /&gt;
 ssh newcomp&lt;br /&gt;
Login should proceed without the need to enter any passwords.  &lt;br /&gt;
&lt;br /&gt;
=== Scheduling/Running Jobs ===&lt;br /&gt;
No jobs/computations, except perhaps very short test runs, should be run on the head node.  Any other jobs will be terminated without prior notice.&lt;br /&gt;
&lt;br /&gt;
All jobs have to be submitted to the scheduler, which will take care of assigning the necessary resources and running the job.  Any computations found running without being submitted through the scheduler, or that were submitted incorrectly (e.g. if the job consumes more cores than allocated or runs after it was supposed to complete), will be terminated without prior notice.&lt;br /&gt;
&lt;br /&gt;
The scheduler in use on newcomp is Torque/Maui.  &lt;br /&gt;
&lt;br /&gt;
==== Torque/Maui Queues ====&lt;br /&gt;
The scheduler will automatically place your job in one of the following queues.  Here are their names and their current limits:&lt;br /&gt;
===== Short Length Queue =====&lt;br /&gt;
* 4 hour wall clock limit&lt;br /&gt;
* 48 max processes total (of all users together)&lt;br /&gt;
* 3 nodes max per job&lt;br /&gt;
===== Medium Length Queue =====&lt;br /&gt;
* 4-24 hour wall clock limit&lt;br /&gt;
* 24 max processes total (of all users together)&lt;br /&gt;
* 2 nodes max per job&lt;br /&gt;
===== Long Length Queue =====&lt;br /&gt;
* 24 hour-7 days wall clock limit&lt;br /&gt;
* 24 max processes total (of all users together)&lt;br /&gt;
* 12 max processes per user&lt;br /&gt;
==== Job Submission Gotchas ====&lt;br /&gt;
Please take a look at the examples below - you absolutely have to specify how many nodes you need, how many cores per node, and the wall clock time.  Make sure you specify enough time for your job to finish, while staying close to the actual run time.  The scheduler uses that information to fit your job in, and requesting much more time than you actually need might make your jobs wait too long to be scheduled.&lt;br /&gt;
==== Submitting Single Core/Serial Jobs ====&lt;br /&gt;
To run a single core program with an executable called, say, myprogram, you will need to write a job script for Torque. Here is a sample command script, serial.cmd, which uses (of course) 1 core:&lt;br /&gt;
&lt;br /&gt;
 cd my_serial_directory&lt;br /&gt;
 cat serial.cmd&lt;br /&gt;
 &lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # serial job using 1 node and 1 processor, and runs&lt;br /&gt;
 # for 3 hours (max).&lt;br /&gt;
 #PBS -l nodes=1:ppn=1,walltime=3:00:00&lt;br /&gt;
 #&lt;br /&gt;
 # sends mail if the process aborts, when it begins, and&lt;br /&gt;
 # when it ends (abe)&lt;br /&gt;
 #PBS -m abe&lt;br /&gt;
 #&lt;br /&gt;
 cd $HOME/my_serial_directory&lt;br /&gt;
 ./myprogram&lt;br /&gt;
&lt;br /&gt;
To submit the job to the scheduling system, use:&lt;br /&gt;
&lt;br /&gt;
 qsub serial.cmd&lt;br /&gt;
==== Submitting Parallel Jobs ====&lt;br /&gt;
To run your parallel/MPI executable called myparallelprog, a job script will need to be created for Torque. Here is a sample command script, parallel.cmd, which uses 16 cores total (8 cores per node).&lt;br /&gt;
&lt;br /&gt;
 cd my_mpi_directory&lt;br /&gt;
 cat parallel.cmd&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # parallel job using 2 nodes and 16 CPU cores, and runs&lt;br /&gt;
 # for 4 hours (max).&lt;br /&gt;
 #PBS -l nodes=2:ppn=8,walltime=4:00:00&lt;br /&gt;
 #&lt;br /&gt;
 # sends mail if the process aborts, when it begins, and&lt;br /&gt;
 # when it ends (abe)&lt;br /&gt;
 #PBS -m abe&lt;br /&gt;
 #&lt;br /&gt;
 module load openmpi&lt;br /&gt;
 cd /u/username/my_mpi_directory&lt;br /&gt;
 numprocs=`wc -l &amp;lt;${PBS_NODEFILE}`&lt;br /&gt;
 mpiexec -np $numprocs ./a.out&lt;br /&gt;
&lt;br /&gt;
To submit the job to the batch system, use:&lt;br /&gt;
&lt;br /&gt;
 qsub parallel.cmd&lt;br /&gt;
==== Useful Scheduler Tools ====&lt;br /&gt;
* showbf - shows how many nodes are available and for how long. The wall clock limit of a job should be less than the duration reported by showbf, otherwise the job will not run before the next scheduled maintenance period.&lt;br /&gt;
* diagnose -p - shows the priority assigned to queued jobs&lt;br /&gt;
* showq or qstat - shows jobs in the queues&lt;br /&gt;
* xpbs - a graphical display of the queues&lt;br /&gt;
* pbstop - a text based view of the cluster nodes (e.g., pbstop -c 1 -m 8 -01234567)&lt;br /&gt;
* qdel - to kill a job&lt;br /&gt;
* qsig -s 0 &amp;lt;jobid&amp;gt; - an alternative way to kill a job that qdel will not remove&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Computational_clusters_in_Fine_Hall&amp;diff=1876</id>
		<title>Documentation and Information:Computational clusters in Fine Hall</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Computational_clusters_in_Fine_Hall&amp;diff=1876"/>
		<updated>2010-07-26T15:34:41Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Fine Hall machine room currently hosts one mini computational cluster.&lt;br /&gt;
&lt;br /&gt;
== NewComp computing cluster ==&lt;br /&gt;
=== Description ===&lt;br /&gt;
The NewComp mini computational cluster consists of 4 nodes, each with 2 Xeon X5680 CPUs - 6 cores each, 12 cores total, running at 3.33GHz.  Each node has 96GB of memory - 8 GB/core.  The head node is equipped with one Intel Xeon X5650 CPU (6 cores, running at 2.67GHz) and 12 GB of memory.&lt;br /&gt;
&lt;br /&gt;
Nodes are connected with gigabit ethernet networking as well as 4x Infiniband.  &lt;br /&gt;
=== Configuration ===&lt;br /&gt;
The cluster is integrated into the Fine Hall Math/PACM network, and all the cluster machines mount Math/PACM home directories.  The operating system used on these machines is a clone of RHEL 6.&lt;br /&gt;
&lt;br /&gt;
For temporary storage, besides /tmp, one can also use /scratch - with no quotas. It must be emphasized that neither /scratch nor /tmp can be used for permanent data storage and no crucial data should be stored there; use them for intermediate computational results instead. /tmp and /scratch are NOT backed up and can be erased at any time, especially if a reinstall of one or more machines is required or if one of these directories is full and other users need space. /tmp is also regularly cleaned up by a system job, and any file in /tmp that hasn't been accessed in the last 10 days will be deleted.&lt;br /&gt;
&lt;br /&gt;
The head node's /scratch is approximately 3TB, and its subdirectory /scratch/network is exported to all nodes (as /scratch/network). Therefore, if you need to read or write temporary data from all nodes, create a subdirectory of /scratch/network (such as /scratch/network/username) and read/write there.&lt;br /&gt;
&lt;br /&gt;
Each node also has local /scratch space of approximately 700GB.  This local disk is quite fast, so consider it for fast data reading and writing.  Just like with /scratch/network, create /scratch/username and read/write from there.  As mentioned above, /scratch/network on the nodes is mounted from the head node, and while bigger in size it is also a lot slower than the local disk.&lt;br /&gt;
&lt;br /&gt;
It cannot be emphasized enough that /scratch (and /scratch/network) is for '''temporary''' data storage '''only'''.  Data placed there will occasionally be purged (without notice, oldest first) as needed to ensure all users have enough space.&lt;br /&gt;
=== Access ===&lt;br /&gt;
At this time the cluster is open to all Math and PACM members.&lt;br /&gt;
&lt;br /&gt;
=== How to connect ===&lt;br /&gt;
To connect to the NewComp cluster, first log in to &amp;lt;tt&amp;gt;math.princeton.edu&amp;lt;/tt&amp;gt;; from there you can:&lt;br /&gt;
 ssh newcomp&lt;br /&gt;
Login should proceed without the need to enter any passwords.  &lt;br /&gt;
&lt;br /&gt;
=== Scheduling/Running Jobs ===&lt;br /&gt;
No jobs/computations, except perhaps very short test runs, should be run on the head node.  Any other jobs will be terminated without prior notice.&lt;br /&gt;
&lt;br /&gt;
All jobs have to be submitted to the scheduler, which will take care of assigning the necessary resources and running the job.  Any computations found running without being submitted through the scheduler, or that were submitted incorrectly (e.g. if the job consumes more cores than allocated or runs after it was supposed to complete), will be terminated without prior notice.&lt;br /&gt;
&lt;br /&gt;
The scheduler in use on newcomp is Torque/Maui.  &lt;br /&gt;
&lt;br /&gt;
==== Torque/Maui Queues ====&lt;br /&gt;
The scheduler will automatically place your job in one of the following queues.  Here are their names and their current limits:&lt;br /&gt;
===== Short Length Queue =====&lt;br /&gt;
* 4 hour wall clock limit&lt;br /&gt;
* 48 max processes total (of all users together)&lt;br /&gt;
* 3 nodes max per job&lt;br /&gt;
===== Medium Length Queue =====&lt;br /&gt;
* 4-24 hour wall clock limit&lt;br /&gt;
* 24 max processes total (of all users together)&lt;br /&gt;
* 2 nodes max per job&lt;br /&gt;
===== Long Length Queue =====&lt;br /&gt;
* 24 hour-7 days wall clock limit&lt;br /&gt;
* 24 max processes total (of all users together)&lt;br /&gt;
* 12 max processes per user&lt;br /&gt;
&lt;br /&gt;
==== Submitting Single Core/Serial Jobs ====&lt;br /&gt;
To run a single core program with an executable called, say, myprogram, you will need to write a job script for Torque. Here is a sample command script, serial.cmd, which uses (of course) 1 core:&lt;br /&gt;
&lt;br /&gt;
 cd my_serial_directory&lt;br /&gt;
 cat serial.cmd&lt;br /&gt;
 &lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # serial job using 1 node and 1 processor, and runs&lt;br /&gt;
 # for 3 hours (max).&lt;br /&gt;
 #PBS -l nodes=1:ppn=1,walltime=3:00:00&lt;br /&gt;
 #&lt;br /&gt;
 # sends mail if the process aborts, when it begins, and&lt;br /&gt;
 # when it ends (abe)&lt;br /&gt;
 #PBS -m abe&lt;br /&gt;
 #&lt;br /&gt;
 cd $HOME/my_serial_directory&lt;br /&gt;
 ./myprogram&lt;br /&gt;
&lt;br /&gt;
To submit the job to the scheduling system, use:&lt;br /&gt;
&lt;br /&gt;
 qsub serial.cmd&lt;br /&gt;
==== Submitting Parallel Jobs ====&lt;br /&gt;
To run your parallel/MPI executable called myparallelprog, a job script will need to be created for Torque. Here is a sample command script, parallel.cmd, which uses 16 cores total (8 cores per node).&lt;br /&gt;
&lt;br /&gt;
 cd my_mpi_directory&lt;br /&gt;
 cat parallel.cmd&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # parallel job using 2 nodes and 16 CPU cores, and runs&lt;br /&gt;
 # for 4 hours (max).&lt;br /&gt;
 #PBS -l nodes=2:ppn=8,walltime=4:00:00&lt;br /&gt;
 #&lt;br /&gt;
 # sends mail if the process aborts, when it begins, and&lt;br /&gt;
 # when it ends (abe)&lt;br /&gt;
 #PBS -m abe&lt;br /&gt;
 #&lt;br /&gt;
 module load openmpi&lt;br /&gt;
 cd /u/username/my_mpi_directory&lt;br /&gt;
 numprocs=`wc -l &amp;lt;${PBS_NODEFILE}`&lt;br /&gt;
 mpiexec -np $numprocs ./myparallelprog&lt;br /&gt;
&lt;br /&gt;
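The numprocs line in the script above counts the lines of the node file that Torque generates (one line per allocated core) and passes that count to mpiexec. As a minimal sketch of that step, using a stand-in file nodefile.txt in place of the real $PBS_NODEFILE (which only exists inside a running job):

```shell
# Stand-in for the node file Torque writes to $PBS_NODEFILE:
# one line per allocated core (2 nodes x 2 cores here).
printf 'node1\nnode1\nnode2\nnode2\n' > nodefile.txt
# Count the lines to get the total number of MPI processes to launch.
numprocs=$(wc -l nodefile.txt | awk '{print $1}')
echo "numprocs=$numprocs"
```

With a real job, the same count equals nodes x ppn requested in the #PBS -l line.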
To submit the job to the batch system, use:&lt;br /&gt;
&lt;br /&gt;
 qsub parallel.cmd&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Computational_clusters_in_Fine_Hall&amp;diff=1875</id>
		<title>Documentation and Information:Computational clusters in Fine Hall</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Computational_clusters_in_Fine_Hall&amp;diff=1875"/>
		<updated>2010-07-26T15:32:42Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Fine Hall machine room currently hosts 1 mini computational cluster.&lt;br /&gt;
&lt;br /&gt;
== NewComp computing cluster ==&lt;br /&gt;
=== Description ===&lt;br /&gt;
The NewComp mini computational cluster consists of 4 nodes, each with 2 Xeon X5680 CPUs (6 cores each, 12 cores per node), running at 3.33GHz.  Each node has 96GB of memory (8 GB/core).  The head node is equipped with one Intel Xeon X5650 CPU (6 cores total, running at 2.67GHz) and 12 GB of memory.&lt;br /&gt;
&lt;br /&gt;
Nodes are connected with gigabit ethernet networking as well as 4x Infiniband.  &lt;br /&gt;
=== Configuration ===&lt;br /&gt;
The cluster is integrated into Fine Hall Math/PACM network and all the cluster machines mount Math/PACM home directories.  The operating system used on these machines is a clone of RHEL 6.&lt;br /&gt;
&lt;br /&gt;
For temporary storage, besides /tmp, one can also use /scratch, which has no quotas. It must be emphasized that neither /scratch nor /tmp can be used for permanent data storage and no crucial data should be stored there; use them for intermediate computational results. /tmp and /scratch are NOT backed up and can be erased at any time, especially if a reinstall of one or more machines is required or if one of these directories is full and other users need space. /tmp is also regularly cleaned up by a system job and any file in /tmp that hasn't been accessed in the last 10 days will be deleted.&lt;br /&gt;
&lt;br /&gt;
The head node's /scratch is approximately 3TB and its subdirectory /scratch/network is exported to all nodes (as /scratch/network). Therefore, if you need to access or write temporary data from all nodes, create a subdirectory of /scratch/network (like /scratch/network/username) and read/write there.&lt;br /&gt;
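The recommended pattern above boils down to creating a per-user subdirectory and writing intermediate data there. A minimal sketch, using a local demo directory in place of /scratch/network/username (the real path only exists on the cluster):

```shell
# Stand-in for /scratch/network/username on the cluster; the real
# path only exists on newcomp, so a local demo directory is used here.
scratchdir=scratch_network_demo
mkdir -p "$scratchdir"
# Write intermediate computational results there, never crucial data:
# anything under /scratch may be purged without notice.
echo "intermediate result" > "$scratchdir/step1.out"
cat "$scratchdir/step1.out"
```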
&lt;br /&gt;
Nodes also have local /scratch space, approximately 700GB in size on each node.  This local disk is also quite fast, so consider it for fast data writing and reading.  Just like with /scratch/network, create a subdirectory such as /scratch/username and read/write from there.  As mentioned above, the /scratch/network on these nodes is mounted from the head node and, while bigger in size, it is also a lot slower than the local disk.&lt;br /&gt;
&lt;br /&gt;
It cannot be emphasized enough that /scratch (and /scratch/network) is for '''temporary''' data storage '''only'''.  Data placed there will occasionally be purged (without notice, oldest first) as needed to ensure all users have enough space.&lt;br /&gt;
=== Access ===&lt;br /&gt;
At this time the cluster is open to all Math and PACM members.&lt;br /&gt;
&lt;br /&gt;
=== How to connect ===&lt;br /&gt;
In order to connect to NewComp cluster you will have to login first to &amp;lt;tt&amp;gt;math.princeton.edu&amp;lt;/tt&amp;gt; and from there you can:&lt;br /&gt;
 ssh newcomp&lt;br /&gt;
Login should proceed without the need to enter any passwords.  &lt;br /&gt;
&lt;br /&gt;
=== Scheduling/Running Jobs ===&lt;br /&gt;
No jobs/computations, except maybe very short test runs, should be run on the head node.  Any other jobs will be terminated without prior notice.&lt;br /&gt;
&lt;br /&gt;
All jobs have to be submitted to the scheduler, which will take care of assigning the necessary resources and running the job.  Any computations found running without being submitted through the scheduler, or that were submitted incorrectly (e.g. a job that consumes more cores than allocated or runs after it was supposed to complete), will be terminated without prior notice.&lt;br /&gt;
&lt;br /&gt;
The scheduler in use on newcomp is torque/maui.  &lt;br /&gt;
&lt;br /&gt;
==== Torque/Maui Queues ====&lt;br /&gt;
The scheduler will automatically place your job in one of the following queues.  Here are their names and their current limits:&lt;br /&gt;
===== Short Length Queue =====&lt;br /&gt;
* 4 hour wall clock limit&lt;br /&gt;
* 48 max processes total (of all users together)&lt;br /&gt;
* 3 nodes max per job&lt;br /&gt;
===== Medium Length Queue =====&lt;br /&gt;
* 4-24 hour wall clock limit&lt;br /&gt;
* 24 max processes total (of all users together)&lt;br /&gt;
* 2 nodes max per job&lt;br /&gt;
===== Long Length Queue =====&lt;br /&gt;
* 24 hour-7 days wall clock limit&lt;br /&gt;
* 24 max processes total (of all users together)&lt;br /&gt;
* 12 max processes per user&lt;br /&gt;
&lt;br /&gt;
==== Submitting Single Core/Serial Jobs ====&lt;br /&gt;
To run a single core program with an executable called, say, myprogram, you will need to write a job script for Torque. Here is a sample command script, serial.cmd, which uses (of course) 1 core:&lt;br /&gt;
&lt;br /&gt;
 cd my_serial_directory&lt;br /&gt;
 cat serial.cmd&lt;br /&gt;
 &lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # serial job using 1 node and 1 processor, and runs&lt;br /&gt;
 # for 3 hours (max).&lt;br /&gt;
 #PBS -l nodes=1:ppn=1,walltime=3:00:00&lt;br /&gt;
 #&lt;br /&gt;
 # sends mail if the process aborts, when it begins, and&lt;br /&gt;
 # when it ends (abe)&lt;br /&gt;
 #PBS -m abe&lt;br /&gt;
 #&lt;br /&gt;
 cd $HOME/my_serial_directory&lt;br /&gt;
 ./myprogram&lt;br /&gt;
&lt;br /&gt;
To submit the job to the scheduling system, use:&lt;br /&gt;
&lt;br /&gt;
 qsub serial.cmd&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Configure_Windows_Printing_for_Fine_Hall&amp;diff=1820</id>
		<title>HowTos:Configure Windows Printing for Fine Hall</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Configure_Windows_Printing_for_Fine_Hall&amp;diff=1820"/>
		<updated>2010-03-19T14:54:47Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In order to print to one of the public printers in Fine Hall from a Windows computer (for example your laptop) you can use the windows print server '''&amp;lt;tt&amp;gt;printserver.math.princeton.edu&amp;lt;/tt&amp;gt;'''. Note that you can access and use Fine Hall printers only within Fine Hall. &lt;br /&gt;
&lt;br /&gt;
These are publicly available printers:&lt;br /&gt;
{|&lt;br /&gt;
| '''Windows printer name'''&lt;br /&gt;
| '''Printer location'''&lt;br /&gt;
| '''Printer type'''&lt;br /&gt;
|-&lt;br /&gt;
| ''&amp;lt;tt&amp;gt;\\finehallprint\fine205&amp;lt;/tt&amp;gt;''&lt;br /&gt;
| 205 Fine Hall&lt;br /&gt;
| HP LaserJet 4250 Duplex&lt;br /&gt;
|-&lt;br /&gt;
| ''&amp;lt;tt&amp;gt;\\finehallprint\fine219&amp;lt;/tt&amp;gt;''&lt;br /&gt;
|  219 Fine Hall Cluster&lt;br /&gt;
|  Dell Duplex 5330DN&lt;br /&gt;
|-&lt;br /&gt;
| ''&amp;lt;tt&amp;gt;\\finehallprint\fine305&amp;lt;/tt&amp;gt;''&lt;br /&gt;
| 305 Fine Hall (restricted access outside business hours)&lt;br /&gt;
| HP LaserJet 4350 Duplex&lt;br /&gt;
|-&lt;br /&gt;
| ''&amp;lt;tt&amp;gt;\\finehallprint\fine511&amp;lt;/tt&amp;gt;''&lt;br /&gt;
| 5th floor Fine Hall, outside offices 504 and 505&lt;br /&gt;
| Dell W5300 Duplex printer&lt;br /&gt;
|-&lt;br /&gt;
| ''&amp;lt;tt&amp;gt;\\finehallprint\fine811&amp;lt;/tt&amp;gt;''&lt;br /&gt;
| 8th floor Fine Hall, outside offices 804 and 805&lt;br /&gt;
| HP LaserJet 4300 Duplex&lt;br /&gt;
|-&lt;br /&gt;
| ''&amp;lt;tt&amp;gt;\\finehallprint\fine1111&amp;lt;/tt&amp;gt;''&lt;br /&gt;
| 11th floor Fine Hall, outside offices 1104 and 1105&lt;br /&gt;
| Dell W5300 Duplex printer&lt;br /&gt;
|}&lt;br /&gt;
While you may find other printers on &amp;lt;tt&amp;gt;finehallprint&amp;lt;/tt&amp;gt; they are private printers reserved for use by their owners so please do not try to use them.&lt;br /&gt;
&lt;br /&gt;
Note also that all of Fine Hall printers default to duplex (double sided) printing so you may have to change your printer settings if you want to print single sided.&lt;br /&gt;
&lt;br /&gt;
== Detailed instructions ==&lt;br /&gt;
Here are example instructions on how to set up one of these printers, for example fine305, on your computer.  First click on the &amp;quot;Start&amp;quot; button (1) and then on &amp;quot;Run&amp;quot; (2):&lt;br /&gt;
[[Image:Fs-startrun.jpg|center]]&lt;br /&gt;
In the &amp;quot;Run&amp;quot; window that comes up, type the printer address from the above table in &amp;quot;Open&amp;quot; (1). In our case, for printer 305, it is ''&amp;lt;tt&amp;gt;\\finehallprint\fine305&amp;lt;/tt&amp;gt;''.  Then click on &amp;quot;OK&amp;quot; (2):&lt;br /&gt;
[[Image:Finehallprint-run305.jpg|center]]&lt;br /&gt;
Your computer will then attempt to connect and it may ask you to confirm the installation with a dialog that resembles the following where you should click on &amp;quot;Yes&amp;quot; (1):&lt;br /&gt;
[[Image:Finehallprint-confirm.jpg|center]]&lt;br /&gt;
That's it - you should now be able to use this printer.&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos&amp;diff=1803</id>
		<title>HowTos</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos&amp;diff=1803"/>
		<updated>2010-03-18T19:56:06Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you will find instructions on how to do some of the more common computing tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Certificates ==&lt;br /&gt;
We used to rely on our own unsigned SSL certificates for Math web servers and e-mail, but we have recently replaced them with [http://certs.ipsca.com/ ipsCA]'s signed certificates.  ipsCA generously provides high quality free SSL certificates to educational institutions.  &lt;br /&gt;
&lt;br /&gt;
All recent browsers and e-mail clients have the appropriate root certificate that can be used to verify the identity of our servers.  Therefore no additional importing of certificates should be required. If you encounter any problems with our SSL certificates (for example, if your browser or e-mail client cannot recognize or verify our SSL certificate), please let us know.&lt;br /&gt;
&lt;br /&gt;
== Connect to Math/PACM systems remotely ==&lt;br /&gt;
There are a number of different ways to access Math/PACM systems and services - login servers, computational machines, E-mail, files on file server and others.  Here are some of these ways:&lt;br /&gt;
* [[HowTos:Access your files on Math/PACM file server via cifs/samba|Access your files on Math/PACM file server via cifs/samba on Windows, Mac OS X or Linux]] - directly access your files on the file server, on campus or after connecting via VPN&lt;br /&gt;
* [[HowTos:Connect to login servers via ssh|Connect to login servers via ssh from Windows, Mac OS X or Linux]] (also copy files back and forth by using ssh/scp)&lt;br /&gt;
* [[HowTos:Remote Linux Desktop access|Remote Linux Desktop access]]&lt;br /&gt;
For E-mail reading/access only please read below.&lt;br /&gt;
&lt;br /&gt;
== E-mail access and configuration ==&lt;br /&gt;
* [[HowTos:E-mail configuration for Thunderbird on Math Linux machines|Configure Thunderbird 2.* on Math Linux workstations]]&lt;br /&gt;
* [[HowTos:E-mail configuration for Thunderbird|Configure Thunderbird 2.* in general]]&lt;br /&gt;
* [[HowTos:E-mail configuration for Thunderbird 3|Configure Thunderbird 3.* in general]]&lt;br /&gt;
* [[HowTos:Read E-mail with webmail|Read your e-mail in your web browser by using Horde/IMP webmail]]&lt;br /&gt;
* [[HowTos:E-mail configuration for Outlook 2007.* in general|E-mail configuration for Outlook 2007 in general]]&lt;br /&gt;
&lt;br /&gt;
== File restore/undelete/backup/snapshots ==&lt;br /&gt;
* [[HowTos:Restore files from snapshots on Linux from home directory on Math file server|How to restore deleted files or previous versions on Linux from home directory on Math/PACM file server]] (for files deleted or changed within last 4 days)&lt;br /&gt;
* [[HowTos:Restore files from snapshots on Windows from home directory on Math file server|How to restore deleted files or previous versions on Windows from home directory on Math/PACM file server]] (for files deleted or changed within last 4 days)&lt;br /&gt;
* [[HowTos:Restore files from snapshots on Mac OS X from home directory on Math file server|How to restore deleted files or previous versions on Mac OS X from home directory on Math/PACM file server]] (for files deleted or changed within last 4 days)&lt;br /&gt;
* [[HowTos:Restore files from backups|How to obtain files from backups]] (for files deleted or changed more than 4 days ago and usually not more than 3-4 months ago)&lt;br /&gt;
&lt;br /&gt;
== Printing ==&lt;br /&gt;
* [[HowTos:Configure MacOSX for Dell W5300n|How to configure your Macintosh for printing with the Dell printers on 11th and 5th floor (W5300n)]]&lt;br /&gt;
* [[HowTos:Configure Windows Printing for Fine Hall|How to configure your Microsoft Windows computer for printing to public printers in Fine Hall]]&lt;br /&gt;
&lt;br /&gt;
== TeX ==&lt;br /&gt;
* [[HowTos:Install TeX on a Microsoft Windows computer|A quick HowTo about installing TeX on a Microsoft Windows computer]]&lt;br /&gt;
* [[HowTos:Add TeX to your webpage|How to add good looking TeX code to your webpages on Math webserver]]&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos&amp;diff=1802</id>
		<title>HowTos</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos&amp;diff=1802"/>
		<updated>2010-03-18T19:53:28Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: /* E-mail access and configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you will find instructions on how to do some of the more common computing tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Certificates ==&lt;br /&gt;
We used to rely on our own unsigned SSL certificates for Math web servers and e-mail, but we have recently replaced them with [http://certs.ipsca.com/ ipsCA]'s signed certificates.  ipsCA generously provides high quality free SSL certificates to educational institutions.  &lt;br /&gt;
&lt;br /&gt;
All recent browsers and e-mail clients have the appropriate root certificate that can be used to verify the identity of our servers.  Therefore no additional importing of certificates should be required. If you encounter any problems with our SSL certificates (for example, if your browser or e-mail client cannot recognize or verify our SSL certificate), please let us know.&lt;br /&gt;
&lt;br /&gt;
== Connect to Math/PACM systems remotely ==&lt;br /&gt;
There are a number of different ways to access Math/PACM systems and services - login servers, computational machines, E-mail, files on file server and others.  Here are some of these ways:&lt;br /&gt;
* [[HowTos:Access your files on Math/PACM file server via cifs/samba|Access your files on Math/PACM file server via cifs/samba on Windows, Mac OS X or Linux]] - directly access your files on the file server, on campus or after connecting via VPN&lt;br /&gt;
* [[HowTos:Connect to login servers via ssh|Connect to login servers via ssh from Windows, Mac OS X or Linux]] (also copy files back and forth by using ssh/scp)&lt;br /&gt;
* [[HowTos:Remote Linux Desktop access|Remote Linux Desktop access]]&lt;br /&gt;
For E-mail reading/access only please read below.&lt;br /&gt;
&lt;br /&gt;
== E-mail access and configuration ==&lt;br /&gt;
* [[HowTos:E-mail configuration for Thunderbird on Math Linux machines|Configure Thunderbird 2.* on Math Linux workstations]]&lt;br /&gt;
* [[HowTos:E-mail configuration for Thunderbird|Configure Thunderbird 2.* in general]]&lt;br /&gt;
* [[HowTos:E-mail configuration for Thunderbird 3|Configure Thunderbird 3.* in general]]&lt;br /&gt;
* [[Read E-mail with webmail|Read your e-mail in your web browser by using Horde/IMP webmail]]&lt;br /&gt;
* [[HowTos:E-mail configuration for Outlook 2007.* in general|E-mail configuration for Outlook 2007 in general]]&lt;br /&gt;
&lt;br /&gt;
== File restore/undelete/backup/snapshots ==&lt;br /&gt;
* [[HowTos:Restore files from snapshots on Linux from home directory on Math file server|How to restore deleted files or previous versions on Linux from home directory on Math/PACM file server]] (for files deleted or changed within last 4 days)&lt;br /&gt;
* [[HowTos:Restore files from snapshots on Windows from home directory on Math file server|How to restore deleted files or previous versions on Windows from home directory on Math/PACM file server]] (for files deleted or changed within last 4 days)&lt;br /&gt;
* [[HowTos:Restore files from snapshots on Mac OS X from home directory on Math file server|How to restore deleted files or previous versions on Mac OS X from home directory on Math/PACM file server]] (for files deleted or changed within last 4 days)&lt;br /&gt;
* [[HowTos:Restore files from backups|How to obtain files from backups]] (for files deleted or changed more than 4 days ago and usually not more than 3-4 months ago)&lt;br /&gt;
&lt;br /&gt;
== Printing ==&lt;br /&gt;
* [[HowTos:Configure MacOSX for Dell W5300n|How to configure your Macintosh for printing with the Dell printers on 11th and 5th floor (W5300n)]]&lt;br /&gt;
* [[HowTos:Configure Windows Printing for Fine Hall|How to configure your Microsoft Windows computer for printing to public printers in Fine Hall]]&lt;br /&gt;
&lt;br /&gt;
== TeX ==&lt;br /&gt;
* [[HowTos:Install TeX on a Microsoft Windows computer|A quick HowTo about installing TeX on a Microsoft Windows computer]]&lt;br /&gt;
* [[HowTos:Add TeX to your webpage|How to add good looking TeX code to your webpages on Math webserver]]&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Frequently_Asked_Questions&amp;diff=1769</id>
		<title>Frequently Asked Questions</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Frequently_Asked_Questions&amp;diff=1769"/>
		<updated>2008-05-29T13:08:35Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: add note about gp computations&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;On this page you will find answers to some of the most frequently asked questions about computing at Fine Hall.&lt;br /&gt;
&lt;br /&gt;
== E-mail ==&lt;br /&gt;
=== Read e-mail ===&lt;br /&gt;
You can read your e-mail in one of the following ways: &lt;br /&gt;
* login via ssh to math.princeton.edu or pacm.princeton.edu and use pine, mutt or other terminal based E-mail clients&lt;br /&gt;
* use [http://www.math.princeton.edu/mail Math Dept. WebMail]&lt;br /&gt;
* configure your e-mail client (like Thunderbird, Mozilla, Outlook and others) to access your e-mail via IMAP by following instructions in [[HowTos#E-mail configuration|HowTos section about E-mail configuration]]&lt;br /&gt;
&lt;br /&gt;
=== Forward e-mail from Princeton to your Math/PACM account ===&lt;br /&gt;
Open the [http://www.princeton.edu/imap OIT Account Management Page] in your browser.  You will be asked for your OIT username and password.  Once logged in, click on the &amp;quot;Set Email Delivery&amp;quot; link on the left.  That will bring up the &amp;quot;Where Is My Mail Going&amp;quot; information; if you haven't changed your e-mail delivery location from the default, it is likely to be &amp;lt;tt&amp;gt;yourusername@mail.Princeton.EDU&amp;lt;/tt&amp;gt;.  Click on the &amp;quot;Change Entry&amp;quot; button (found next to the current delivery E-mail), and on the next screen forward your Princeton email by setting your primary mail delivery location to your &amp;lt;tt&amp;gt;yourusername@math.princeton.edu&amp;lt;/tt&amp;gt; E-mail address, then click on &amp;quot;Submit Changes&amp;quot;.&lt;br /&gt;
=== Forward e-mail from your math account ===&lt;br /&gt;
To forward all of your math e-mail to another account, e.g. if you are leaving Princeton, create a .forward file in your home directory that contains the e-mail address where your email should be forwarded.  You can specify multiple E-mail addresses, each on its own line.&lt;br /&gt;
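As a minimal sketch of creating such a file (using a demo directory in place of your real home directory, and placeholder addresses):

```shell
# Demo directory standing in for $HOME; the real file is $HOME/.forward.
mkdir -p forward_demo
# One destination address per line; both addresses are placeholders.
printf 'myaddress@example.com\nbackup@example.org\n' > forward_demo/.forward
cat forward_demo/.forward
```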
=== Vacation messages ===&lt;br /&gt;
Vacation messages can be set through Math/PACM webmail by going to  [https://www.math.princeton.edu/horde3/vacation/ https://www.math.princeton.edu/horde3/vacation/].  You can also find vacation setting webpage under the &amp;quot;My Account&amp;quot; on the left side menu of the [https://www.math.princeton.edu/mail webmail].&lt;br /&gt;
&lt;br /&gt;
On the vacation webpage you can turn the vacation message on and off, specify the subject and content of vacation message replies, and specify how often to send them.  Finally, you can even set vacation start and end times.&lt;br /&gt;
&lt;br /&gt;
== Passwords ==&lt;br /&gt;
=== Types of passwords ===&lt;br /&gt;
Your Math/PACM account has two passwords associated with it: the Linux/LDAP password, which is used for everything except accessing the fileserver through windows file sharing (also called smb, cifs or samba file sharing), and the windows/cifs password.&lt;br /&gt;
=== Password changing ===&lt;br /&gt;
If you need to change your password you should do it through the [https://www.math.princeton.edu/horde3/passwd/ Math/PACM webmail interface].  This way your LDAP password will be changed together with your windows/cifs password, ensuring they stay the same. Once logged in with your current password, you will be prompted for your current and new passwords.&lt;br /&gt;
&lt;br /&gt;
You can also find password changing webpage under the &amp;quot;My Account&amp;quot; on the left side menu of the [https://www.math.princeton.edu/mail webmail].&lt;br /&gt;
&lt;br /&gt;
== Running computations ==&lt;br /&gt;
=== Computation guidelines ===&lt;br /&gt;
These are the guidelines for running computations on Math/PACM machines:&lt;br /&gt;
* Unless running computations on dedicated machines (like the Comp or Macomp cluster or your own desktop), all jobs should be reniced to 19, e.g.:&lt;br /&gt;
 nice -n 19 mycomputation&lt;br /&gt;
This ensures that interactive users of the machine you are using for your calculations are not impacted.  Your job will still get all the available free CPU time.&lt;br /&gt;
* Please make sure your computation does not consume too much memory.  This is particularly important if you intend to run your computations on desktops used by others.  Too much memory use on machines that do not have much to begin with will push the operating system into swapping, which will severely impact both the user of the desktop and your own computation.  Most Fine Hall desktops have only 512MB of memory, so you should make sure your job doesn't consume more than, say, 100MB or so - the less the better.  &lt;br /&gt;
* If your job requires a lot of memory and you do not have access to the macomp cluster, please feel free to run it on the login server - math.princeton.edu - which has both a pair of very fast processors and 4GB of memory.  You should still limit your job to not more than 2GB of memory (or 3GB, but only for a short period of time).  Also take into account your per-job memory consumption and the number of jobs you and others are already running on math.princeton.edu.  E.g. running more than 1 computation that requires 2GB or more will quickly produce an unproductive environment for all the users.&lt;br /&gt;
* Computational jobs on math.princeton.edu are automatically reniced and you should limit yourself to at most 2 computations at any one time.  If your computation is a long lasting one you do not have to renice your job, but if you intend to run lots of short ones please do so (as automatic renicing does not kick in immediately).&lt;br /&gt;
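The renicing guideline above can be tried out with a harmless stand-in computation (here a trivial shell arithmetic command in place of your real executable):

```shell
# Run a stand-in "computation" at the lowest CPU priority (niceness 19);
# in practice the command after nice would be your own executable.
nice -n 19 sh -c 'echo "result: $((6*7))"'
```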
=== Run computations and disconnect ===&lt;br /&gt;
If your computation is a long lasting one, it is best to start it up in such a way that you can log out and the computation will continue.  This also prevents your computation from failing if you lose network connectivity.  To achieve this you should run your computations with the nohup command.  Nohup will make sure the job is disconnected from the terminal; in other words, it will make sure that when you disconnect your job will not get the &amp;quot;I have logged out, please quit&amp;quot; signal.  For example, you should type something like this:&lt;br /&gt;
 nohup nice -19 mycomputation_program &amp;gt; my_output.txt 2&amp;gt;&amp;amp;1 &amp;amp;&lt;br /&gt;
This will run your job, reniced to 19, and any output (both regular and error) will be placed into the file my_output.txt.  If you want to place the error output into a separate file instead, you can do:&lt;br /&gt;
 nohup nice -19 mycomputation_program &amp;gt; my_output.txt 2&amp;gt; my_erroroutput.txt &amp;amp;&lt;br /&gt;
If you do not need the output from the command, e.g. because your program dumps its results directly into various files, you can redirect all of the other output into /dev/null:&lt;br /&gt;
 nohup nice -19 mycomputation_program &amp;gt; /dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&lt;br /&gt;
&lt;br /&gt;
=== Run disconnected computations with matlab ===&lt;br /&gt;
If you want to run matlab computations you can do something like:&lt;br /&gt;
 nohup nice -19 matlab -nodisplay -nodesktop -nojvm -nosplash &amp;lt; mymatlab_commands.m &amp;gt; my_output.txt 2&amp;gt;&amp;amp;1 &amp;amp;&lt;br /&gt;
&lt;br /&gt;
=== Run disconnected computations with gp ===&lt;br /&gt;
For computations with gp (pari) just write your gp commands in a text file, say my_commands.gp, and then run the computation with something like&lt;br /&gt;
 nohup nice -19 gp &amp;lt; my_commands.gp &amp;gt; my_output.txt 2&amp;gt;&amp;amp;1 &amp;amp;&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Main_Page&amp;diff=1768</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Main_Page&amp;diff=1768"/>
		<updated>2007-12-18T20:41:47Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Quick Links ==&lt;br /&gt;
{| border=&amp;quot;0&amp;quot; cellpadding=&amp;quot;2&amp;quot; width=&amp;quot;90%&amp;quot;&lt;br /&gt;
! [http://math.princeton.edu/ssh.html http://cgi.math.princeton.edu/compudocwiki/images/7/74/Terminal.jpg]!![https://www.math.princeton.edu/mail http://cgi.math.princeton.edu/compudocwiki/images/2/24/Email.jpg]&lt;br /&gt;
|-&lt;br /&gt;
! Web SSH http://math.princeton.edu/ssh.shtml &amp;lt;br&amp;gt; Alternate Web SSH http://math.princeton.edu/ssh2.shtml!! Webmail https://www.math.princeton.edu/mail&lt;br /&gt;
|- style=&amp;quot;height:40px&amp;quot;&lt;br /&gt;
! &lt;br /&gt;
|- &lt;br /&gt;
! [https://www.math.princeton.edu/horde3/passwd/ http://cgi.math.princeton.edu/compudocwiki/images/6/64/Password.jpg] !! [https://www.math.princeton.edu/horde3/vacation/ http://cgi.math.princeton.edu/compudocwiki/images/3/3a/Vacation.jpg]&lt;br /&gt;
|- &lt;br /&gt;
! Change Password !! Set Vacation Message&lt;br /&gt;
|- &lt;br /&gt;
|+&amp;amp;nbsp;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Contact us or ask for help =&lt;br /&gt;
In order to contact Math/PACM computing support please e-mail [mailto:compudoc@princeton.edu compudoc@princeton.edu].&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:SGE&amp;diff=1767</id>
		<title>Documentation and Information:SGE</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:SGE&amp;diff=1767"/>
		<updated>2007-10-12T18:05:47Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: added matlab usage&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
In Math Dept./PACM we use the Sun Grid Engine (SGE from now on) for job submission and management on various clusters.  This page contains information on SGE/Sun Grid Engine usage on those clusters.&lt;br /&gt;
&lt;br /&gt;
All jobs on the cluster have to be submitted through the SGE.  SGE will queue up your job and then choose free node(s) on which it will be run.  If there are no free nodes, or not enough of them, your job will wait in the queue until appropriate resources are available and then the job will be executed.&lt;br /&gt;
&lt;br /&gt;
Before proceeding you may want to first read [[Documentation_and_Information:Modules|documentation about modules]] because you are likely to have to use them if you will be using MPI or if you will be using compilers different from gcc (like PGI or Intel).  We will also refer to modules and show them in examples below.&lt;br /&gt;
&lt;br /&gt;
== Basic SGE usage ==&lt;br /&gt;
When submitting a job you will first have to create a submission script that will, when executed, launch your actual computation.  The submission script can also contain various options that will be interpreted by SGE and that will influence how your job is executed.  &lt;br /&gt;
&lt;br /&gt;
=== Serial jobs/qsub ===&lt;br /&gt;
We will begin with a serial job, i.e. a job that will run on only one processor.  Create a submission script, for example call it myjob.sh (.sh extension because this is going to be a bash/sh script but the extension is not necessary - you can choose any name).  We will be running myjobexecutable located in myjobdir:&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 # following option makes sure the job will run in the current directory&lt;br /&gt;
 #$ -cwd&lt;br /&gt;
 # following option makes sure the job has the same environment variables as the submission shell&lt;br /&gt;
 #$ -V&lt;br /&gt;
 &lt;br /&gt;
 # this executable was compiled with intel compiler so we need to load the intel module so that all the libraries will work and be found&lt;br /&gt;
 module load intel&lt;br /&gt;
 # and now the actual executable&lt;br /&gt;
 $HOME/myjobdir/myjobexecutable option1 option2&lt;br /&gt;
This job can then be submitted with the qsub command; we will name this job run &amp;quot;Job_name&amp;quot;:&lt;br /&gt;
 qsub -N Job_name myjob.sh&lt;br /&gt;
SGE will queue up the job and assign it a number (say 3233, as in the 3233rd job).  From then on you can refer to this job either by the name you used during submission (the &amp;quot;-N&amp;quot; option) or by its number (3233 in this case).&lt;br /&gt;
&lt;br /&gt;
If the job, i.e. myjobexecutable, outputs anything on the terminal, SGE will redirect that output (stdout) and errors (stderr) into files named Job_name.o3233 (for stdout) and Job_name.e3233 (for stderr), located in the same directory from which the job was submitted.  These files should be the first place to look if you need to debug errors in your program or the submission script.&lt;br /&gt;
&lt;br /&gt;
=== Basic qsub options ===&lt;br /&gt;
We've already seen the &amp;quot;-N&amp;quot; option, but there are two other options that were placed in the submission script itself instead of being specified on the command line.  Any option that qsub understands on the command line can also be specified in the submission script.  You put such option(s) on a line of their own beginning with &amp;quot;#$&amp;quot;. For example, instead of specifying &amp;quot;-N Job_name&amp;quot; we could've added the following line to the above script and submitted the job with just &amp;quot;qsub myjob.sh&amp;quot;:&lt;br /&gt;
 #$ -N Job_name&lt;br /&gt;
The &amp;quot;-cwd&amp;quot; and &amp;quot;-V&amp;quot; options have already appeared in the myjob.sh sample script.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;-cwd&amp;quot; makes the job execute in the directory from which it was submitted.  If this option is missing the job will be executed in your home directory.  You will almost always want this option, which is why it is convenient to place it in your submission scripts.  It is mainly useful because, if the job reads input files (say initial conditions from a file INPUT) and/or creates output files (say OUTPUT) in the current working directory, you will want to create a separate directory for each of your runs and submit your jobs from those directories.&lt;br /&gt;
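The per-run directory layout suggested above can be sketched as follows; the directory and file names (run1, run2, INPUT) are made-up examples:&lt;br /&gt;

```shell
# One directory per run, each with its own INPUT file; you would then
# cd into a run directory and call qsub from there, so that -cwd makes
# the job read and write its files in that directory.
for run in run1 run2; do
    mkdir -p "$run"
    echo "initial conditions for $run" > "$run/INPUT"
done
ls run1 run2
```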
&lt;br /&gt;
The &amp;quot;-V&amp;quot; option makes sure that your job has the same environment variables as the shell from which you submit it.  Again, this is a prudent option to always have, though it shouldn't be depended on completely (in the MPI case the slave processes might not actually respect this option, unlike the master node, which always will).&lt;br /&gt;
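Putting the pieces together, a submission script with all three options embedded might look like the file written below (the job name, module and executable are the made-up examples from earlier).  The final grep lists the embedded directives the same way qsub finds them, i.e. lines beginning with &amp;quot;#$&amp;quot;:&lt;br /&gt;

```shell
# Write a hypothetical submission script with all options embedded
# (names are illustrative and match the earlier examples)
printf '%s\n' \
    '#!/bin/sh' \
    '#$ -N Job_name' \
    '#$ -cwd' \
    '#$ -V' \
    'module load intel' \
    '$HOME/myjobdir/myjobexecutable option1 option2' \
    > myjob.sh
# qsub treats lines starting with "#$" as embedded options:
grep '^#\$' myjob.sh
```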
&lt;br /&gt;
There are numerous other options that you can use - some are listed below and others can be found on qsub's man page.&lt;br /&gt;
&lt;br /&gt;
=== Cluster/job status ===&lt;br /&gt;
Now that you know how to submit a job, you will also want to know how to check on its status, as well as on the status of the cluster.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;qstat&amp;quot; will show you the status of Grid Engine jobs and queues.  For example:&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qstat&lt;br /&gt;
 job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID&lt;br /&gt;
 -----------------------------------------------------------------------------------------------------------------&lt;br /&gt;
  232629 0.51000 IMAGE005   mathuser     r     06/18/2006 14:29:48 all.q@comp-02                      4&lt;br /&gt;
  231554 0.52111 Pt_Al_vac  student      r     06/16/2006 14:47:32 all.q@comp-04                      3&lt;br /&gt;
  232626 0.51000 IMAGE002   professor    r     06/18/2006 14:29:33 all.q@comp-11                      1&lt;br /&gt;
  232597 0.52333 O_img3     someoneelse  r     06/18/2006 13:16:48 all.q@comp-16                      6&lt;br /&gt;
&lt;br /&gt;
If you type &amp;quot;qstat -f&amp;quot; you will get a detailed list of queues (on each host) and jobs in that queue.  &lt;br /&gt;
&lt;br /&gt;
You can get extensive details about a job with &amp;quot;qstat -j jobname/jobnumber&amp;quot;.  This might also be useful to find out why a job is still waiting to be executed (especially when you have submitted the job with some requirements, like large memory).&lt;br /&gt;
&lt;br /&gt;
You can get a general picture of how busy the cluster really is by typing &amp;quot;qstat -g c&amp;quot;:&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qstat -g c&lt;br /&gt;
 CLUSTER QUEUE                   CQLOAD   USED  AVAIL  TOTAL aoACDS  cdsuE&lt;br /&gt;
 -------------------------------------------------------------------------------&lt;br /&gt;
 all.q                             0.31     14      1    16      0      1&lt;br /&gt;
&lt;br /&gt;
Finally, you can get a quick view on the status of cluster nodes by running &amp;quot;qhost&amp;quot;:&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qhost&lt;br /&gt;
 HOSTNAME                ARCH         NCPU  LOAD  MEMTOT  MEMUSE  SWAPTO  SWAPUS&lt;br /&gt;
 -------------------------------------------------------------------------------&lt;br /&gt;
 global                  -               -     -       -       -       -       -&lt;br /&gt;
 comp01                  lx26-x86        1  1.00 1011.1M  164.2M 1024.0M   86.4M&lt;br /&gt;
 comp02                  lx26-x86        1  1.04  503.5M  491.7M 1024.0M  628.2M&lt;br /&gt;
 comp03                  lx26-x86        1  2.04  503.6M  334.5M 1024.0M  175.1M&lt;br /&gt;
 comp04                  lx26-x86        1  1.12  503.6M  184.7M 1024.0M  169.6M&lt;br /&gt;
 .                       .               .  .       .     .         .        .&lt;br /&gt;
 .                       .               .  .       .     .         .        .&lt;br /&gt;
 .                       .               .  .       .     .         .        .&lt;br /&gt;
 comp16                  lx26-x86        1  0.00 1011.1M   92.9M 1024.0M     0.0&lt;br /&gt;
&lt;br /&gt;
=== Cancel/modify jobs ===&lt;br /&gt;
If you decide to cancel/delete one of your jobs (or other users' jobs, if you have been designated a cluster administrator) you can do so with the &amp;quot;qdel&amp;quot; command, using job name(s) or job ID(s).  You can also delete all jobs belonging to a particular user:&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qdel job_name1&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qdel 33245 33246 33247&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qdel -u smith&lt;br /&gt;
If a job is already running and a regular qdel is not working, try forcing the removal with the &amp;quot;-f&amp;quot; option, e.g. &amp;quot;qdel -f job_name1&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;qmod&amp;quot; command allows you to modify a job - e.g. you can suspend it, reschedule it, clear error states and so on.&lt;br /&gt;
&lt;br /&gt;
=== Job statistics ===&lt;br /&gt;
After the job has finished you can ask SGE for its statistics - e.g. CPU time and memory used during execution:&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qacct -j 232741&lt;br /&gt;
&lt;br /&gt;
=== Parallel jobs (MPI - mpich) ===&lt;br /&gt;
The submission script for MPI parallel jobs has to contain a very specific mpirun command.  This is because mpirun needs to be given the list of machines that SGE has reserved for the job's use.  We also want mpirun to use SGE's rsh command, which ensures that the job can be properly monitored and controlled by SGE.  In particular, we can then cancel it or view how many CPU cycles it used.  Here is an example submission script for the myjobdir/myparallel.exe MPI job compiled with mpich:&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 # following option makes sure the job will run in the current directory&lt;br /&gt;
 #$ -cwd&lt;br /&gt;
 # following option makes sure the job has the same environment variables as the submission shell&lt;br /&gt;
 #$ -V&lt;br /&gt;
 # VERY IMPORTANT: load appropriate environment module&lt;br /&gt;
 # in this case this program was compiled with mpich intel version&lt;br /&gt;
 module load mpich/intel&lt;br /&gt;
 # and now run the program&lt;br /&gt;
 mpirun -np $NSLOTS -machinefile $TMPDIR/machines -rsh $TMPDIR/rsh $HOME/myjobdir/myparallel.exe param1 param2&lt;br /&gt;
This is how we submit the job to be executed on 10 processors with the job name Job_name:&lt;br /&gt;
 qsub -N Job_name -pe mpich 10 mympijob.sh&lt;br /&gt;
The key option is &amp;quot;-pe&amp;quot;, which accepts 2 parameters: the parallel environment and the number of processors you want to reserve for your job.  The number of processors can also be specified as a range, e.g. 10-20, and SGE will give you as many as are available in that range.  A table describing the various options for the parallel environment follows.&lt;br /&gt;
&lt;br /&gt;
The next example is for MPI executables compiled with openmpi.  Note that the file is different from the one we use for mpich: aside from loading a different module, we also use mpiexec instead of mpirun.&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 # following option makes sure the job will run in the current directory&lt;br /&gt;
 #$ -cwd&lt;br /&gt;
 # following option makes sure the job has the same environment variables as the submission shell&lt;br /&gt;
 #$ -V&lt;br /&gt;
 # VERY IMPORTANT: load appropriate environment module&lt;br /&gt;
 # in this case this program was compiled with openmpi pgi version&lt;br /&gt;
 module load openmpi/pgi&lt;br /&gt;
 # and now run the program&lt;br /&gt;
 mpiexec -np $NSLOTS $HOME/myjobdir/myparallel.exe param1 param2&lt;br /&gt;
You would submit the above job with a line resembling:&lt;br /&gt;
 qsub -N Job_name -pe openmpi 10 mympijob.sh&lt;br /&gt;
&lt;br /&gt;
== More advanced SGE usage ==&lt;br /&gt;
=== Request a node with lots of memory ===&lt;br /&gt;
If your job requires a lot of memory you can ask SGE to assign you nodes with a minimum amount of free memory by specifying the job resource requirement '''mem_free'''.  E.g.&lt;br /&gt;
 qsub -l mem_free=1G testjob.sh&lt;br /&gt;
would ask for nodes with at least 1GB of free memory.  Similarly, if you want to see which nodes match your requirements you can query for the same resource:&lt;br /&gt;
 qhost -l mem_free=1G&lt;br /&gt;
The output should contain all the nodes that currently have at least 1GB of free memory.&lt;br /&gt;
&lt;br /&gt;
Note that the job will wait in the queue until a host with enough memory is available - in other words, until all of your requirements can be met.  To check why a job is waiting, ask for its details with &amp;quot;qstat -j jobnum&amp;quot;.&lt;br /&gt;
=== Run SGE Mathematica jobs ===&lt;br /&gt;
The simplest way is to create a file with your Mathematica commands, say math-input.m, and feed it to math as input in the batch file:&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 # following option makes sure the job will run in the current directory&lt;br /&gt;
 #$ -cwd&lt;br /&gt;
 # following option makes sure the job has the same environment variables as the submission shell&lt;br /&gt;
 #$ -V&lt;br /&gt;
 math &amp;lt; math-input.m&lt;br /&gt;
If we name the above file math-job.sh and place it in the same directory as math-input.m, we can submit it from that directory with&lt;br /&gt;
 qsub math-job.sh&lt;br /&gt;
The output will be left in the file math-job.sh.o#JOBNUM# and any errors in math-job.sh.e#JOBNUM#.&lt;br /&gt;
&lt;br /&gt;
You could also incorporate the Mathematica commands in the job file itself, rather than keeping them in a separate file:&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 # following option makes sure the job will run in the current directory&lt;br /&gt;
 #$ -cwd&lt;br /&gt;
 # following option makes sure the job has the same environment variables as the submission shell&lt;br /&gt;
 #$ -V&lt;br /&gt;
 math &amp;lt;&amp;lt;END_MATH_COMMANDS&lt;br /&gt;
 1+1&lt;br /&gt;
 3*3&lt;br /&gt;
 END_MATH_COMMANDS&lt;br /&gt;
The above notation means that everything between &amp;quot;&amp;lt;&amp;lt;END_MATH_COMMANDS&amp;quot; and &amp;quot;END_MATH_COMMANDS&amp;quot; will be used as the math program's input.  You can again submit this job with qsub.&lt;br /&gt;
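The here-document is equivalent to piping the same lines into the program's standard input.  In this illustrative check, cat stands in for math (which may not be installed where you try it):&lt;br /&gt;

```shell
# cat is a stand-in for the math binary; the real script feeds the
# same lines to math via the here-document shown above.
printf '%s\n' '1+1' '3*3' | cat
```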
=== Run SGE Matlab jobs ===&lt;br /&gt;
You can run Matlab jobs similarly to Mathematica jobs; just use &amp;quot;matlab -nodisplay -nodesktop -nojvm -nosplash&amp;quot; as the command.  E.g. you could do:&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 # following option makes sure the job will run in the current directory&lt;br /&gt;
 #$ -cwd&lt;br /&gt;
 # following option makes sure the job has the same environmnent variables as the submission shell&lt;br /&gt;
 #$ -V&lt;br /&gt;
 matlab -nodisplay -nodesktop -nojvm -nosplash &amp;lt; math-input.m&lt;br /&gt;
or, since &amp;quot;-r&amp;quot; takes a Matlab command rather than a file name, you could run the script by name without the .m extension:&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 # following option makes sure the job will run in the current directory&lt;br /&gt;
 #$ -cwd&lt;br /&gt;
 # following option makes sure the job has the same environment variables as the submission shell&lt;br /&gt;
 #$ -V&lt;br /&gt;
 matlab -nodisplay -nodesktop -nojvm -nosplash -r math-input&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:SGE&amp;diff=1765</id>
		<title>Documentation and Information:SGE</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:SGE&amp;diff=1765"/>
		<updated>2007-09-27T20:09:11Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
In Math Dept./PACM we use the Sun Grid Engine (SGE from now on) for job submission and management on various clusters.  This page contains information on SGE usage on those clusters.&lt;br /&gt;
&lt;br /&gt;
All jobs on the cluster have to be submitted through the SGE.  SGE will queue up your job and then choose free node(s) on which it will be run.  If there are no free nodes, or not enough of them, your job will wait in the queue until appropriate resources are available and then the job will be executed.&lt;br /&gt;
&lt;br /&gt;
Before proceeding you may want to first read the [[Documentation_and_Information:Modules|documentation about modules]], because you are likely to need them if you will be using MPI or compilers other than gcc (such as PGI or Intel).  We will also refer to modules and show them in the examples below.&lt;br /&gt;
&lt;br /&gt;
== Basic SGE usage ==&lt;br /&gt;
When submitting a job you will first have to create a submission script that will, when executed, launch your actual computation.  The submission script can also contain various options that will be interpreted by SGE and that will influence how your job is executed.  &lt;br /&gt;
&lt;br /&gt;
=== Serial jobs/qsub ===&lt;br /&gt;
We will begin with a serial job, i.e. a job that runs on only one processor.  Create a submission script; for example, call it myjob.sh (the .sh extension is used because this will be a bash/sh script, but the extension is not required - you can choose any name).  We will be running myjobexecutable, located in myjobdir:&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 # following option makes sure the job will run in the current directory&lt;br /&gt;
 #$ -cwd&lt;br /&gt;
 # following option makes sure the job has the same environment variables as the submission shell&lt;br /&gt;
 #$ -V&lt;br /&gt;
 &lt;br /&gt;
 # this executable was compiled with intel compiler so we need to load the intel module so that all the libraries will work and be found&lt;br /&gt;
 module load intel&lt;br /&gt;
 # and now the actual executable&lt;br /&gt;
 $HOME/myjobdir/myjobexecutable option1 option2&lt;br /&gt;
This job can then be submitted with the qsub command, and we will name this job &amp;quot;Job_name&amp;quot;:&lt;br /&gt;
 qsub -N Job_name myjob.sh&lt;br /&gt;
SGE will queue up the job and assign it a number (say 3233, as in the 3233rd job).  From then on you can refer to this job either by the name you used during submission (the &amp;quot;-N&amp;quot; option) or by its number (3233 in this case).&lt;br /&gt;
&lt;br /&gt;
If the job, i.e. myjobexecutable, outputs anything to the terminal, SGE will redirect that output (stdout) and errors (stderr) into files named Job_name.o3233 (for stdout) and Job_name.e3233 (for stderr), located in the same directory where the job was submitted.  These files should be the first place to look if you need to debug errors in your program or the submission script.&lt;br /&gt;
&lt;br /&gt;
=== Basic qsub options ===&lt;br /&gt;
We've already seen the &amp;quot;-N&amp;quot; option, but there were two other options placed in the submission script itself instead of being specified on the command line.  Any option that qsub understands on the command line can also be specified in the submission script: put such an option on a line of its own that begins with &amp;quot;#$&amp;quot;.  For example, instead of specifying &amp;quot;-N Job_name&amp;quot; we could've added the following line to the above script and submitted the job with just &amp;quot;qsub myjob.sh&amp;quot;:&lt;br /&gt;
 #$ -N Job_name&lt;br /&gt;
The &amp;quot;-cwd&amp;quot; and &amp;quot;-V&amp;quot; options have already appeared in the myjob.sh sample script.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;-cwd&amp;quot; makes the job execute in the directory from which it was submitted.  If this option is missing the job will be executed in your home directory.  You will almost always want this option, which is why it is convenient to place it in your submission scripts.  It is mainly useful because, if the job reads input files (say initial conditions from a file INPUT) and/or creates output files (say OUTPUT) in the current working directory, you will want to create a separate directory for each of your runs and submit your jobs from those directories.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;-V&amp;quot; option makes sure that your job has the same environment variables as the shell from which you submit it.  Again, this is a prudent option to always have, though it shouldn't be depended on completely (in the MPI case the slave processes might not actually respect this option, unlike the master node, which always will).&lt;br /&gt;
&lt;br /&gt;
There are numerous other options that you can use - some are listed below and others can be found on qsub's man page.&lt;br /&gt;
&lt;br /&gt;
=== Cluster/job status ===&lt;br /&gt;
Now that you know how to submit a job, you will also want to know how to check on its status, as well as on the status of the cluster.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;qstat&amp;quot; will show you the status of Grid Engine jobs and queues.  For example:&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qstat&lt;br /&gt;
 job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID&lt;br /&gt;
 -----------------------------------------------------------------------------------------------------------------&lt;br /&gt;
  232629 0.51000 IMAGE005   mathuser     r     06/18/2006 14:29:48 all.q@comp-02                      4&lt;br /&gt;
  231554 0.52111 Pt_Al_vac  student      r     06/16/2006 14:47:32 all.q@comp-04                      3&lt;br /&gt;
  232626 0.51000 IMAGE002   professor    r     06/18/2006 14:29:33 all.q@comp-11                      1&lt;br /&gt;
  232597 0.52333 O_img3     someoneelse  r     06/18/2006 13:16:48 all.q@comp-16                      6&lt;br /&gt;
&lt;br /&gt;
If you type &amp;quot;qstat -f&amp;quot; you will get a detailed list of queues (on each host) and the jobs in each of them.  &lt;br /&gt;
&lt;br /&gt;
You can get extensive details about a job with &amp;quot;qstat -j jobname&amp;quot; (or job number).  This is also useful for finding out why a job is still waiting to be executed (especially when you have submitted it with extra requirements, like large memory).&lt;br /&gt;
&lt;br /&gt;
You can get a general picture of how busy the cluster really is by typing &amp;quot;qstat -g c&amp;quot;:&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qstat -g c&lt;br /&gt;
 CLUSTER QUEUE                   CQLOAD   USED  AVAIL  TOTAL aoACDS  cdsuE&lt;br /&gt;
 -------------------------------------------------------------------------------&lt;br /&gt;
 all.q                             0.31     14      1    16      0      1&lt;br /&gt;
&lt;br /&gt;
Finally, you can get a quick view of the status of the cluster nodes by running &amp;quot;qhost&amp;quot;:&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qhost&lt;br /&gt;
 HOSTNAME                ARCH         NCPU  LOAD  MEMTOT  MEMUSE  SWAPTO  SWAPUS&lt;br /&gt;
 -------------------------------------------------------------------------------&lt;br /&gt;
 global                  -               -     -       -       -       -       -&lt;br /&gt;
 comp01                  lx26-x86        1  1.00 1011.1M  164.2M 1024.0M   86.4M&lt;br /&gt;
 comp02                  lx26-x86        1  1.04  503.5M  491.7M 1024.0M  628.2M&lt;br /&gt;
 comp03                  lx26-x86        1  2.04  503.6M  334.5M 1024.0M  175.1M&lt;br /&gt;
 comp04                  lx26-x86        1  1.12  503.6M  184.7M 1024.0M  169.6M&lt;br /&gt;
 .                       .               .  .       .     .         .        .&lt;br /&gt;
 .                       .               .  .       .     .         .        .&lt;br /&gt;
 .                       .               .  .       .     .         .        .&lt;br /&gt;
 comp16                  lx26-x86        1  0.00 1011.1M   92.9M 1024.0M     0.0&lt;br /&gt;
&lt;br /&gt;
=== Cancel/modify jobs ===&lt;br /&gt;
If you decide to cancel/delete one of your jobs (or another user's, if you have been designated a cluster administrator) you can do so with the &amp;quot;qdel&amp;quot; command, using job name(s) or job ID(s).  You can also delete all jobs belonging to a particular user:&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qdel job_name1&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qdel 33245 33246&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qdel -u smith&lt;br /&gt;
If a job is already running and a regular qdel does not work, try forcing the removal with the &amp;quot;-f&amp;quot; option, e.g. &amp;quot;qdel -f job_name1&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;qmod&amp;quot; command allows you to modify a job - e.g. you can suspend it, reschedule it, clear its error state, and so on.&lt;br /&gt;
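For example (3233 stands in for one of your own job IDs; these are standard qmod flags):&lt;br /&gt;
 qmod -sj 3233&lt;br /&gt;
 qmod -usj 3233&lt;br /&gt;
 qmod -cj 3233&lt;br /&gt;
The first command suspends the job, the second unsuspends it, and the third clears its error state.&lt;br /&gt;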
&lt;br /&gt;
=== Job statistics ===&lt;br /&gt;
After job execution has ended you can ask SGE for its statistics - e.g. CPU time and memory used during execution:&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qacct -j 232741&lt;br /&gt;
&lt;br /&gt;
=== Parallel jobs (MPI - mpich) ===&lt;br /&gt;
The submission script for an MPI parallel job has to contain a very specific mpirun command.  This is because mpirun needs to be given the list of machines that SGE has reserved for the job.  We also want mpirun to use SGE's rsh command, which ensures that the job can be properly monitored and controlled by SGE.  In particular we can then cancel it or view how many CPU cycles it used.  Example submission script for the myjobdir/myparallel.exe MPI job compiled with mpich:&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 # following option makes sure the job will run in the current directory&lt;br /&gt;
 #$ -cwd&lt;br /&gt;
 # following option makes sure the job has the same environment variables as the submission shell&lt;br /&gt;
 #$ -V&lt;br /&gt;
 # VERY IMPORTANT: load appropriate environment module&lt;br /&gt;
 # in this case this program was compiled with mpich intel version&lt;br /&gt;
 module load mpich/intel&lt;br /&gt;
 # and now run the program&lt;br /&gt;
 mpirun -np $NSLOTS -machinefile $TMPDIR/machines -rsh $TMPDIR/rsh $HOME/myjobdir/myparallel.exe param1 param2&lt;br /&gt;
This is how we submit this job to be executed on 10 processors with job name Job_name:&lt;br /&gt;
 qsub -N Job_name -pe mpich 10 mympijob.sh&lt;br /&gt;
The key option is &amp;quot;-pe&amp;quot;, which accepts two parameters: the name of the parallel environment (mpich here, openmpi in the next example) and the number of processors you want to reserve for your job.  The number of processors can also be specified as a range, e.g. 10-20, in which case SGE will give you as many as are available in that range.  &lt;br /&gt;
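For instance, to let SGE pick anywhere between 10 and 20 processors (script and job names as in the example above):&lt;br /&gt;
 qsub -N Job_name -pe mpich 10-20 mympijob.sh&lt;br /&gt;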
&lt;br /&gt;
The next example is for MPI executables compiled with openmpi.  Note that the script is different from the one we use for mpich: aside from loading a different module we also use mpiexec instead of mpirun.&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 # following option makes sure the job will run in the current directory&lt;br /&gt;
 #$ -cwd&lt;br /&gt;
 # following option makes sure the job has the same environment variables as the submission shell&lt;br /&gt;
 #$ -V&lt;br /&gt;
 # VERY IMPORTANT: load appropriate environment module&lt;br /&gt;
 # in this case this program was compiled with openmpi pgi version&lt;br /&gt;
 module load openmpi/pgi&lt;br /&gt;
 # and now run the program&lt;br /&gt;
 mpiexec -np $NSLOTS $HOME/myjobdir/myparallel.exe param1 param2&lt;br /&gt;
You would submit the above job with a line resembling:&lt;br /&gt;
 qsub -N Job_name -pe openmpi 10 mympijob.sh&lt;br /&gt;
&lt;br /&gt;
== More advanced SGE usage ==&lt;br /&gt;
=== Request a node with lots of memory ===&lt;br /&gt;
If your job requires a lot of memory you can ask SGE to assign it to nodes with a minimum amount of free memory by specifying the job resource requirement '''mem_free'''.  E.g.&lt;br /&gt;
 qsub -l mem_free=1G testjob.sh&lt;br /&gt;
would ask for nodes with at least 1GB of free memory.  Similarly, if you want to see which nodes currently match your requirement you can query for the same resource:&lt;br /&gt;
 qhost -l mem_free=1G&lt;br /&gt;
The output should contain all the nodes that currently have at least 1GB of free memory.&lt;br /&gt;
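Resource requests can be combined with the other qsub options shown earlier; for example (job and script names are placeholders):&lt;br /&gt;
 qsub -N Big_job -l mem_free=1G -pe mpich 4 myjob.sh&lt;br /&gt;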
&lt;br /&gt;
Note that the job will wait in the queue until a host with enough memory is available - in other words, until all of your requirements can be met.  To check why a job is waiting, just ask for its details with &amp;quot;qstat -j jobnum&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:SGE&amp;diff=1764</id>
		<title>Documentation and Information:SGE</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:SGE&amp;diff=1764"/>
		<updated>2007-09-27T19:21:36Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
In Math Dept./PACM we use the Sun Grid Engine (SGE from now on) for job submission and management on various clusters.  This page contains information on SGE/Sun Grid Engine usage on those clusters.&lt;br /&gt;
&lt;br /&gt;
All jobs on the cluster have to be submitted through the SGE.  SGE will queue up your job and then choose free node(s) on which it will be run.  If there are no free nodes, or not enough of them, your job will wait in the queue until appropriate resources are available and then the job will be executed.&lt;br /&gt;
&lt;br /&gt;
Before proceeding you may want to first read the [[Documentation_and_Information:Modules|documentation about modules]], because you are likely to need them if you will be using MPI or compilers other than gcc (like PGI or Intel).  We will also refer to modules and show them in the examples below.&lt;br /&gt;
&lt;br /&gt;
== Basic SGE usage ==&lt;br /&gt;
When submitting a job you will first have to create a submission script that will, when executed, launch your actual computation.  The submission script can also contain various options that will be interpreted by SGE and that will influence how your job is executed.  &lt;br /&gt;
&lt;br /&gt;
=== Serial jobs/qsub ===&lt;br /&gt;
We will begin with a serial job, i.e. a job that runs on only one processor.  Create a submission script - for example, call it myjob.sh (the .sh extension is used because this will be a bash/sh script, but an extension is not required - you can choose any name).  We will be running myjobexecutable located in myjobdir:&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 # following option makes sure the job will run in the current directory&lt;br /&gt;
 #$ -cwd&lt;br /&gt;
 # following option makes sure the job has the same environment variables as the submission shell&lt;br /&gt;
 #$ -V&lt;br /&gt;
 &lt;br /&gt;
 # this executable was compiled with intel compiler so we need to load the intel module so that all the libraries will work and be found&lt;br /&gt;
 module load intel&lt;br /&gt;
 # and now the actual executable&lt;br /&gt;
 $HOME/myjobdir/myjobexecutable option1 option2&lt;br /&gt;
This job can then be submitted with the qsub command; we will name this job run &amp;quot;Job_name&amp;quot;:&lt;br /&gt;
 qsub -N Job_name myjob.sh&lt;br /&gt;
SGE will queue up the job and assign it a number (say 3233, as in the 3233rd job).  From then on you can refer to this job either by the name you used during submission (the &amp;quot;-N&amp;quot; option) or by its number (3233 in this case).&lt;br /&gt;
&lt;br /&gt;
If the job, i.e. myjobexecutable, outputs anything to the terminal, SGE will redirect that output (stdout) and any errors (stderr) into files named Job_name.o3233 (for stdout) and Job_name.e3233 (for stderr), located in the same directory from which the job was submitted.  These files should be the first place to look if you need to debug errors in your program or the submission script.&lt;br /&gt;
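You can inspect these files with standard tools while the job runs or after it finishes, e.g. (job number 3233 is just the example above):&lt;br /&gt;
 cat Job_name.o3233&lt;br /&gt;
 cat Job_name.e3233&lt;br /&gt;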
&lt;br /&gt;
=== Basic qsub options ===&lt;br /&gt;
We've already seen the &amp;quot;-N&amp;quot; option, but there were two other options placed in the submission script itself instead of being specified on the command line.  Any option that qsub understands on the command line can also be specified in the submission script.  Put such an option on a line of its own that begins with &amp;quot;#$&amp;quot;.  For example, instead of specifying &amp;quot;-N Job_name&amp;quot; we could have added the following line to the above script and submitted the job with just &amp;quot;qsub myjob.sh&amp;quot;:&lt;br /&gt;
 #$ -N Job_name&lt;br /&gt;
The &amp;quot;-cwd&amp;quot; and &amp;quot;-V&amp;quot; options were already seen in the myjob.sh sample script. &lt;br /&gt;
&lt;br /&gt;
&amp;quot;-cwd&amp;quot; makes the job execute in the directory from which it was submitted.  If this option is missing the job will be executed in your home directory.  You will almost always want this option, which is why it is convenient to place it in your submission scripts.  It is especially useful when the job reads input files (say initial conditions from a file INPUT) and/or creates output files (say OUTPUT) in the current working directory: you can then create a separate directory for each of your runs and submit your jobs from those directories.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;-V&amp;quot; option makes sure that your job has the same environment variables as the shell from which you submitted it.  Again, it is prudent to always include this option, though it should not be depended on completely (in the MPI case the slave processes might not actually respect it, unlike the master node, which always will).&lt;br /&gt;
&lt;br /&gt;
There are numerous other options that you can use - some are listed below and others can be found on qsub's man page.&lt;br /&gt;
&lt;br /&gt;
=== Cluster/job status ===&lt;br /&gt;
Now that you know how to submit a job you will also want to know how to check on its status, as well as on the status of the cluster. &lt;br /&gt;
&lt;br /&gt;
&amp;quot;qstat&amp;quot; will show you the status of Grid Engine jobs and queues.  For example:&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qstat&lt;br /&gt;
 job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID&lt;br /&gt;
 -----------------------------------------------------------------------------------------------------------------&lt;br /&gt;
  232629 0.51000 IMAGE005   mathuser     r     06/18/2006 14:29:48 all.q@comp-02                      4&lt;br /&gt;
  231554 0.52111 Pt_Al_vac  student      r     06/16/2006 14:47:32 all.q@comp-04                      3&lt;br /&gt;
  232626 0.51000 IMAGE002   professor    r     06/18/2006 14:29:33 all.q@comp-11                      1&lt;br /&gt;
  232597 0.52333 O_img3     someoneelse  r     06/18/2006 13:16:48 all.q@comp-16                      6&lt;br /&gt;
&lt;br /&gt;
If you type &amp;quot;qstat -f&amp;quot; you will get a detailed list of queues (on each host) and the jobs in each of them.  &lt;br /&gt;
&lt;br /&gt;
You can get extensive details about a job with &amp;quot;qstat -j jobname&amp;quot; (or job number).  This is also useful for finding out why a job is still waiting to be executed (especially when you have submitted it with extra requirements, like large memory).&lt;br /&gt;
&lt;br /&gt;
You can get a general picture of how busy the cluster really is by typing &amp;quot;qstat -g c&amp;quot;:&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qstat -g c&lt;br /&gt;
 CLUSTER QUEUE                   CQLOAD   USED  AVAIL  TOTAL aoACDS  cdsuE&lt;br /&gt;
 -------------------------------------------------------------------------------&lt;br /&gt;
 all.q                             0.31     14      1    16      0      1&lt;br /&gt;
&lt;br /&gt;
Finally, you can get a quick view of the status of the cluster nodes by running &amp;quot;qhost&amp;quot;:&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qhost&lt;br /&gt;
 HOSTNAME                ARCH         NCPU  LOAD  MEMTOT  MEMUSE  SWAPTO  SWAPUS&lt;br /&gt;
 -------------------------------------------------------------------------------&lt;br /&gt;
 global                  -               -     -       -       -       -       -&lt;br /&gt;
 comp01                  lx26-x86        1  1.00 1011.1M  164.2M 1024.0M   86.4M&lt;br /&gt;
 comp02                  lx26-x86        1  1.04  503.5M  491.7M 1024.0M  628.2M&lt;br /&gt;
 comp03                  lx26-x86        1  2.04  503.6M  334.5M 1024.0M  175.1M&lt;br /&gt;
 comp04                  lx26-x86        1  1.12  503.6M  184.7M 1024.0M  169.6M&lt;br /&gt;
 .                       .               .  .       .     .         .        .&lt;br /&gt;
 .                       .               .  .       .     .         .        .&lt;br /&gt;
 .                       .               .  .       .     .         .        .&lt;br /&gt;
 comp16                  lx26-x86        1  0.00 1011.1M   92.9M 1024.0M     0.0&lt;br /&gt;
&lt;br /&gt;
=== Cancel/modify jobs ===&lt;br /&gt;
If you decide to cancel/delete one of your jobs (or another user's, if you have been designated a cluster administrator) you can do so with the &amp;quot;qdel&amp;quot; command, using job name(s) or job ID(s).  You can also delete all jobs belonging to a particular user:&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qdel job_name1&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qdel 33245 33246&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qdel -u smith&lt;br /&gt;
If a job is already running and a regular qdel does not work, try forcing the removal with the &amp;quot;-f&amp;quot; option, e.g. &amp;quot;qdel -f job_name1&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;qmod&amp;quot; command allows you to modify a job - e.g. you can suspend it, reschedule it, clear its error state, and so on.&lt;br /&gt;
&lt;br /&gt;
=== Job statistics ===&lt;br /&gt;
After job execution has ended you can ask SGE for its statistics - e.g. CPU time and memory used during execution:&lt;br /&gt;
 [mathuser@comp01 mathuser]$ qacct -j 232741&lt;br /&gt;
&lt;br /&gt;
=== Parallel jobs (MPI - mpich) ===&lt;br /&gt;
The submission script for an MPI parallel job has to contain a very specific mpirun command.  This is because mpirun needs to be given the list of machines that SGE has reserved for the job.  We also want mpirun to use SGE's rsh command, which ensures that the job can be properly monitored and controlled by SGE.  In particular we can then cancel it or view how many CPU cycles it used.  Example submission script for the myjobdir/myparallel.exe MPI job:&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 # following option makes sure the job will run in the current directory&lt;br /&gt;
 #$ -cwd&lt;br /&gt;
 # following option makes sure the job has the same environment variables as the submission shell&lt;br /&gt;
 #$ -V&lt;br /&gt;
 # load the appropriate environment module here (e.g. &amp;quot;module load mpich&amp;quot;)&lt;br /&gt;
 mpirun -np $NSLOTS -machinefile $TMPDIR/machines -rsh $TMPDIR/rsh $HOME/myjobdir/myparallel.exe param1 param2&lt;br /&gt;
This is how we submit this job to be executed on 10 hosts with job name Job_name:&lt;br /&gt;
 qsub -N Job_name -pe mpich 10 mympijob.sh&lt;br /&gt;
The key option is &amp;quot;-pe&amp;quot;, which accepts two parameters: the parallel environment (a table of the available ones follows) and the number of processors you want to reserve for your job.  The number of processors can also be specified as a range, e.g. 10-20, in which case SGE will give you as many as are available in that range.  The biggest difference among the MPICH parallel environments is in the way SGE assigns free CPUs on each node (every node has 2 CPUs): &lt;br /&gt;
{|border=&amp;quot;1&amp;quot;&lt;br /&gt;
|+ Available parallel environments&lt;br /&gt;
! SGE Parallel Environment Name !! Parallel Library !! how are CPUs assigned &lt;br /&gt;
|-&lt;br /&gt;
| mpich_fillup || MPICH || try to use both CPUs on each node&lt;br /&gt;
|-&lt;br /&gt;
| mpich || MPICH || try to assign one CPU per node&lt;br /&gt;
|-&lt;br /&gt;
| mpich_double || MPICH || insists on using both CPUs on every assigned node&lt;br /&gt;
|}&lt;br /&gt;
You should probably use mpich_fillup for your MPICH jobs.&lt;br /&gt;
&lt;br /&gt;
== More advanced SGE usage ==&lt;br /&gt;
=== Request a node with lots of memory ===&lt;br /&gt;
If your job requires a lot of memory you can ask SGE to assign it to nodes with a minimum amount of free memory by specifying the job resource requirement '''mem_free'''.  E.g.&lt;br /&gt;
 qsub -l mem_free=8G testjob.sh&lt;br /&gt;
would ask for nodes with at least 8GB of free memory.  Similarly, if you want to see which nodes currently match your requirement you can query for the same resource:&lt;br /&gt;
 qhost -l mem_free=8G&lt;br /&gt;
The output should contain all the nodes that currently have at least 8GB of free memory.&lt;br /&gt;
&lt;br /&gt;
Note that the job will wait in the queue until a host with enough memory is available - in other words, until all of your requirements can be met.  To check why a job is waiting, just ask for its details with &amp;quot;qstat -j jobnum&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Modules&amp;diff=1763</id>
		<title>Documentation and Information:Modules</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Modules&amp;diff=1763"/>
		<updated>2007-09-27T19:06:59Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
On Math/PACM systems we use Environment Modules (or just Modules) to set up various software packages for proper use on the command line.  In particular you will have to use modules in order to start using the Intel or PGI compilers, or to compile code with openmpi or mpich.&lt;br /&gt;
&lt;br /&gt;
In essence, Environment Modules gives you the ability to easily modify your Unix environment, making it easier to use software packages.  Instead of setting shell environment variables by hand, all the variables needed for a particular software package can be loaded dynamically, including those of the package's dependencies.  You use Environment Modules by running the module command.&lt;br /&gt;
&lt;br /&gt;
== How to use ==&lt;br /&gt;
&lt;br /&gt;
To see all available modules, run module avail. For example:&lt;br /&gt;
&lt;br /&gt;
 [mathuser@math ~]$ module avail&lt;br /&gt;
 ------------ /usr/share/Modules/modulefiles ------------ &lt;br /&gt;
 dot         module-cvs  module-info modules     null        use.own     &lt;br /&gt;
 ------------ /usr/local/share/Modules/modulefiles ------------ &lt;br /&gt;
 mpich-debug/gcc/1.2.7p1/32        mpich/intel-9.1/1.2.7p1/32        &lt;br /&gt;
 mpich-debug/intel-10.0/1.2.7p1/32 mpich/pgi-7.0/1.2.7p1/32          &lt;br /&gt;
 mpich-debug/intel-9.1/1.2.7p1/32  openmpi/gcc/1.2.4/32              &lt;br /&gt;
 mpich-debug/pgi-7.0/1.2.7p1/32    openmpi/intel-10.0/1.2.4/32       &lt;br /&gt;
 mpich/gcc/1.2.7p1/32              openmpi/intel-9.1/1.2.4/32        &lt;br /&gt;
 mpich/intel-10.0/1.2.7p1/32       openmpi/pgi-7.0/1.2.4/32          &lt;br /&gt;
 ------------ /opt/share/Modules/modulefiles ------------ &lt;br /&gt;
 intel/9.1/32/C/9.1.051         intel/10.0/32/Fortran/10.0.026 &lt;br /&gt;
 intel/9.1/32/Fortran/9.1.051   intel/10.0/32/Iidb/10.0.026    &lt;br /&gt;
 intel/9.1/32/Iidb/9.1.051      intel/10.0/32/default          &lt;br /&gt;
 intel/9.1/32/default           pgi/7.0/32                     &lt;br /&gt;
 intel/10.0/32/C/10.0.026       &lt;br /&gt;
&lt;br /&gt;
To use a particular module, run module load modulename.  You don't need to give the full module name as listed above; if you use only the first component, it will choose the latest version for you (it actually chooses the last item alphabetically).  For example:&lt;br /&gt;
&lt;br /&gt;
 [mathuser@math ~]$ module load openmpi &lt;br /&gt;
&lt;br /&gt;
To see what modules are loaded, run module list. For example:&lt;br /&gt;
&lt;br /&gt;
 [mathuser@math ~]$ module list&lt;br /&gt;
 Currently Loaded Modulefiles:&lt;br /&gt;
  1) pgi/7.0/32                2) openmpi/pgi-7.0/1.2.4/32 &lt;br /&gt;
&lt;br /&gt;
You can see from the above that the openmpi module automatically loaded the OpenMPI built with the PGI compilers, because the letter 'p' comes after 'g' and 'i' in the alphabet.  Also, the PGI OpenMPI requires the PGI compilers, so they were loaded automatically.  If you had wanted to load the OpenMPI built with GCC, simply run module load openmpi/gcc.  If you want to load a specific version of the OpenMPI built with, say, the Intel compiler, run module load openmpi/intel/1.2.4. &lt;br /&gt;
&lt;br /&gt;
Now that the openmpi module is loaded, you can run OpenMPI commands. For example:&lt;br /&gt;
&lt;br /&gt;
 [mathuser@math ~]$ which mpicc&lt;br /&gt;
 /usr/local/openmpi/1.2.4/pgi70/i386/bin/mpicc&lt;br /&gt;
 [mathuser@math ~]$ mpicc -o program program.c&lt;br /&gt;
&lt;br /&gt;
To unload a module, run module unload modulename. For example:&lt;br /&gt;
&lt;br /&gt;
 [mathuser@math ~]$ module unload openmpi&lt;br /&gt;
&lt;br /&gt;
 [mathuser@math ~]$ module list&lt;br /&gt;
 No Modulefiles Currently Loaded.&lt;br /&gt;
&lt;br /&gt;
As you can see, the openmpi module has been unloaded, as well as all its dependencies.&lt;br /&gt;
&lt;br /&gt;
To automatically load modules when you log in to a system, put the module load modulename command in your shell's startup script. For example, if your shell is bash, put the command in your ~/.bashrc, and if your shell is csh, put it in your ~/.cshrc. You can and should also use the module command in your SGE submission script, and it will automatically load the module when the job is run.&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information&amp;diff=1762</id>
		<title>Documentation and Information</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information&amp;diff=1762"/>
		<updated>2007-09-27T18:59:11Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: /* Computational Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you will find documentation and information relevant to computing systems at Fine Hall.  If you are looking for specific instructions on how to perform certain tasks then you are more likely to find what you are looking for on [[HowTos]] and [[Frequently_Asked_Questions|Frequently Asked Questions]] pages.&lt;br /&gt;
&lt;br /&gt;
== Introductions ==&lt;br /&gt;
* [[Documentation_and_Information:Getting started with Linux|Getting started with Linux command line]]&lt;br /&gt;
&lt;br /&gt;
== Computational Resources ==&lt;br /&gt;
* [[Documentation_and_Information:Computational clusters in Fine Hall|Computational clusters in Fine Hall]]&lt;br /&gt;
* [[Documentation_and_Information:Computationally related software|Computationally related software]]&lt;br /&gt;
* [[Documentation_and_Information:Modules|How to use environment modules]]&lt;br /&gt;
* [[Documentation_and_Information:SGE|How to use SGE scheduling software]]&lt;br /&gt;
&lt;br /&gt;
== Printers ==&lt;br /&gt;
* [[Documentation_and_Information:Public printers|Publicly accessible printers]]&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information&amp;diff=1761</id>
		<title>Documentation and Information</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information&amp;diff=1761"/>
		<updated>2007-09-27T14:21:59Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: /* Introductions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you will find documentation and information relevant to computing systems at Fine Hall.  If you are looking for specific instructions on how to perform certain tasks then you are more likely to find what you are looking for on [[HowTos]] and [[Frequently_Asked_Questions|Frequently Asked Questions]] pages.&lt;br /&gt;
&lt;br /&gt;
== Introductions ==&lt;br /&gt;
* [[Documentation_and_Information:Getting started with Linux|Getting started with Linux command line]]&lt;br /&gt;
&lt;br /&gt;
== Computational Resources ==&lt;br /&gt;
* [[Documentation_and_Information:Computational clusters in Fine Hall|Computational clusters in Fine Hall]]&lt;br /&gt;
* [[Documentation_and_Information:Computationally related software|Computationally related software]]&lt;br /&gt;
* [[Documentation_and_Information:Compilers and SGE|How to use compilers and the SGE scheduling software]]&lt;br /&gt;
&lt;br /&gt;
== Printers ==&lt;br /&gt;
* [[Documentation_and_Information:Public printers|Publicly accessible printers]]&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Getting_started_with_Linux&amp;diff=1760</id>
		<title>Documentation and Information:Getting started with Linux</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Getting_started_with_Linux&amp;diff=1760"/>
		<updated>2007-09-27T14:21:41Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Before proceeding please either be at a Linux workstation (e.g. in Fine Hall) or acquaint yourself with [[HowTos:Connect_to_login_servers_via_ssh|how to connect to connect to login servers via ssh]] and connect to math.princeton.edu or pacm.princeton.edu with ssh in the way appropriate for your computer.  You should have been provided with a username and password for the Math/PACM Linux network.  If you do not and you believe you should please check the [[Help:Contents|help contacts]] for relevant contact info.  &lt;br /&gt;
&lt;br /&gt;
This is a very quick and dirty introduction to the Linux command line that should help you get started.  At the end of this document you can find links to much better and more detailed tutorials.&lt;br /&gt;
&lt;br /&gt;
Before you begin reading this document there is one command that you should remember: &amp;quot;man&amp;quot;.  Anytime you are at a loss about how to use a particular command or what it does, first try &amp;quot;man command&amp;quot;, e.g. &amp;quot;man cp&amp;quot;, and you will get a manual page with lots of details.  Not every command has a man page, but you will find man essential nevertheless.&lt;br /&gt;
&lt;br /&gt;
== Shell, prompt, commands ==&lt;br /&gt;
After connecting to the cluster with ssh you will be presented with a prompt that resembles the following (here we use sample user &amp;quot;mathuser&amp;quot;):&lt;br /&gt;
 [mathuser@math mathuser]&lt;br /&gt;
The prompt is presented by the shell, the first program that the system runs on your behalf as soon as you connect and that will help you interact with the remote computer by interpreting commands you issue at the prompt.  For example:&lt;br /&gt;
 [mathuser@math mathuser] cp mycalc.c newproject/mycalc-new.c&lt;br /&gt;
where the first word &amp;quot;cp&amp;quot; is the command name and the remainder of the line are command's parameters.&lt;br /&gt;
&lt;br /&gt;
== Files and directories ==&lt;br /&gt;
File names in Linux can contain virtually any character except &amp;quot;/&amp;quot;, which is used as the separator between directories in path names.  For example, &amp;quot;/home/mathuser/mycalc.c&amp;quot; would be the full path to mycalc.c in mathuser's home directory.&lt;br /&gt;
&lt;br /&gt;
All files on the system are located in the directory tree that starts at the &amp;quot;root&amp;quot; - &amp;quot;/&amp;quot;.  Most of the directory tree is reserved for system use and is carefully partitioned.  For example &amp;quot;/bin&amp;quot; and &amp;quot;/usr/bin&amp;quot; contain many of the commands that you will be using.&lt;br /&gt;
&lt;br /&gt;
Each user has a home directory that holds the user's documents, configuration files and other data.  For example, mathuser's home directory would be &amp;quot;/home/mathuser&amp;quot;.  mathuser can also refer to this home directory by the shorthand &amp;quot;~&amp;quot;.  For example, instead of writing &amp;quot;/home/mathuser/newproject/mycalc-new.c&amp;quot; mathuser could refer to the same file as &amp;quot;~/newproject/mycalc-new.c&amp;quot;.&lt;br /&gt;
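The equivalence of &amp;quot;~&amp;quot; and the home directory is easy to verify, since the shell expands &amp;quot;~&amp;quot; to the value of the HOME environment variable before running the command (a small sketch; the path components are illustrative):&lt;br /&gt;

```shell
# Both lines print the same path: the shell rewrites "~" to $HOME
# before the command ever sees it.
echo ~/newproject/mycalc-new.c
echo "$HOME/newproject/mycalc-new.c"
```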
&lt;br /&gt;
== Changing, creating, deleting directories ==&lt;br /&gt;
=== cd, cwd, pwd ===&lt;br /&gt;
Whenever you issue a command you do so from within a certain directory, called the &amp;quot;current working directory&amp;quot; (cwd).  You can move between directories with the &amp;quot;cd&amp;quot; command, and &amp;quot;pwd&amp;quot; will print the current working directory (the following examples are explained below):&lt;br /&gt;
 [mathuser@math mathuser] pwd&lt;br /&gt;
 /home/mathuser&lt;br /&gt;
 [mathuser@math mathuser] cd newproject&lt;br /&gt;
 [mathuser@math newproject] pwd&lt;br /&gt;
 /home/mathuser/newproject&lt;br /&gt;
 [mathuser@math newproject] cd ..&lt;br /&gt;
 [mathuser@math mathuser] pwd&lt;br /&gt;
 /home/mathuser&lt;br /&gt;
 [mathuser@math mathuser] cd .&lt;br /&gt;
 [mathuser@math mathuser] pwd&lt;br /&gt;
 /home/mathuser&lt;br /&gt;
Note that the shell is configured by default to not only show you your username and the machine to which you are connected (mathuser@math) but also the name of the current working directory.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;..&amp;quot; (two dots) is a shortcut for the parent directory, i.e. the directory above the current working directory, which is how in the above example we returned to /home/mathuser.  A single dot &amp;quot;.&amp;quot; refers to the current working directory itself.&lt;br /&gt;
&lt;br /&gt;
All relative file references are resolved against the current working directory.  For example, if the cwd is &amp;quot;/home/mathuser&amp;quot; then we can refer to &amp;quot;/home/mathuser/newproject/mycalc-new.c&amp;quot; as &amp;quot;newproject/mycalc-new.c&amp;quot; or as &amp;quot;./newproject/mycalc-new.c&amp;quot;.  If the cwd were &amp;quot;/home/mathuser/newproject&amp;quot; we could refer to it simply as &amp;quot;mycalc-new.c&amp;quot; or, to get more complicated with .. and ., as &amp;quot;../newproject/./mycalc-new.c&amp;quot;.&lt;br /&gt;
=== mkdir, rmdir = create/delete directory ===&lt;br /&gt;
mkdir will create a directory:&lt;br /&gt;
 [mathuser@math mathuser] mkdir newproject2&lt;br /&gt;
rmdir will remove it:&lt;br /&gt;
 [mathuser@math mathuser] rmdir newproject2&lt;br /&gt;
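A useful option worth knowing here is -p, which lets mkdir create several nested levels at once (a sketch with made-up directory names):&lt;br /&gt;

```shell
# Plain mkdir would refuse to create src/tests inside a directory
# that does not exist yet; -p creates all missing parents and is
# silent if the directory is already there.
mkdir -p newproject2/src/tests
```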
&lt;br /&gt;
== Basic file operation ==&lt;br /&gt;
=== cp = copy ===&lt;br /&gt;
&amp;quot;cp&amp;quot; command is used to copy files and directories.  For example:&lt;br /&gt;
 [mathuser@math mathuser] cp mycalc.c newproject/mycalc-new.c&lt;br /&gt;
copies the file mycalc.c into the directory newproject under the new name mycalc-new.c (if the destination already exists it will be overwritten).  You can use wildcards to copy more than one file at a time (but the destination then has to be a directory):&lt;br /&gt;
 [mathuser@math mathuser] cp *.c newproject&lt;br /&gt;
copies all files that end with .c into directory newproject.  An alternate way to do this would be:&lt;br /&gt;
 [mathuser@math mathuser] cd newproject&lt;br /&gt;
 [mathuser@math newproject] cp ../*.c .&lt;br /&gt;
note the use of .. to refer to the directory above the newproject directory and of &amp;quot;.&amp;quot; to refer to the current working directory.  The same could've been accomplished with:&lt;br /&gt;
 [mathuser@math mathuser] cd newproject&lt;br /&gt;
 [mathuser@math newproject] cp ~/*.c /home/mathuser/newproject/&lt;br /&gt;
The cp command, like most other commands, accepts options that modify its behaviour.  For example, if you were beginning work on newproject2007 and wanted to start by copying all the files from newproject, you could do the following:&lt;br /&gt;
 [mathuser@math mathuser] cp -r newproject newproject2007&lt;br /&gt;
which recursively (&amp;quot;-r&amp;quot;) copies the directory newproject to the directory newproject2007 (caveat: if newproject2007 already existed you would end up with a copy of newproject at newproject2007/newproject; if it didn't exist, newproject2007 would be created with contents matching newproject).&lt;br /&gt;
&lt;br /&gt;
=== rm = remove files ===&lt;br /&gt;
The rm command removes files (and, with special flags, directories).  For example:&lt;br /&gt;
 [mathuser@math mathuser] rm newproject/*.c&lt;br /&gt;
or if we wanted to remove the directory newproject with all of its contents:&lt;br /&gt;
 [mathuser@math mathuser] rm -rf newproject&lt;br /&gt;
which forcibly removes the newproject directory and all of its subdirectories.  Be VERY careful with the &amp;quot;-rf&amp;quot; options.&lt;br /&gt;
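One defensive habit, sketched below, is the -i option, which makes rm ask for confirmation before each removal (the file name is made up; &amp;quot;yes n&amp;quot; answers no to every prompt so the example runs unattended):&lt;br /&gt;

```shell
touch important.txt           # create a throwaway file
yes n | rm -i important.txt   # rm asks before removing; the piped "n" declines
ls important.txt              # the file is still there
```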
&lt;br /&gt;
=== mv = move files ===&lt;br /&gt;
The mv command moves and renames files.  For example, the following renames myfftw2.c to fftw3.c:&lt;br /&gt;
 [mathuser@math mathuser] mv myfftw2.c fftw3.c&lt;br /&gt;
and the following moves all the Fortran files into the f90 subdirectory (which must already exist):&lt;br /&gt;
 [mathuser@math mathuser] mv *.f90 *.f77 *.f f90/&lt;br /&gt;
&lt;br /&gt;
=== ls = list files ===&lt;br /&gt;
The ls command lists the files in the current directory (with no parameters) or in a given directory:&lt;br /&gt;
 [mathuser@math mathuser] ls newproject&lt;br /&gt;
 atlas.f90 fftw3.c fftw3.h Makefile gftw.c&lt;br /&gt;
 [mathuser@math mathuser] ls -d newproject&lt;br /&gt;
 newproject&lt;br /&gt;
 [mathuser@math mathuser] ls newproject/*.c&lt;br /&gt;
 fftw3.c gftw.c&lt;br /&gt;
 [mathuser@math mathuser] ls -ld newproject/*.c newproject&lt;br /&gt;
 -rw-r-----   1 mathuser ugrad    2563 May 16 11:28 newproject/fftw3.c&lt;br /&gt;
 -rw-rw-r--   1 mathuser ugrad  111921 Feb  3 09:40 newproject/gftw.c&lt;br /&gt;
 drwx------   5 mathuser ugrad    4096 May 16 11:28 newproject&lt;br /&gt;
&amp;quot;-l&amp;quot; makes ls produce a long listing with details for every file, and the &amp;quot;-d&amp;quot; option makes ls show the directory itself rather than its contents.&lt;br /&gt;
&lt;br /&gt;
Consider the detailed listing - besides the file name at the end of the line you can also find the time the file was modified, the name of the user that owns the file (mathuser), the name of the group associated with the file (used only to determine who can access the file or directory in question) and the file size (in bytes).&lt;br /&gt;
&lt;br /&gt;
The first field &amp;quot;-rw-rw-r--&amp;quot; or &amp;quot;drwx------&amp;quot; shows the type of the file and access rights to the file.  The first letter will be &amp;quot;-&amp;quot; for files, &amp;quot;d&amp;quot; for directories and &amp;quot;l&amp;quot; for symbolic links (more on them later).  The remaining characters determine access rights to the file - for the user (letters 2,3,4), for the group (5,6,7) and for everyone else (8,9,10).  These are some of the possible rights:&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|+ File/Directory rights&lt;br /&gt;
! File Type !! Right !! Explanation&lt;br /&gt;
|-&lt;br /&gt;
| file || r || file is readable&lt;br /&gt;
|-&lt;br /&gt;
| file || w || file is writeable&lt;br /&gt;
|-&lt;br /&gt;
| file || x || file is executable - for programs&lt;br /&gt;
|-&lt;br /&gt;
| directory || r || contents of the directory can be read&lt;br /&gt;
|-&lt;br /&gt;
| directory || w || new files and dirs can be created in the directory&lt;br /&gt;
|-&lt;br /&gt;
| directory || x || directory can be entered&lt;br /&gt;
|}&lt;br /&gt;
Examples:&lt;br /&gt;
* &amp;quot;-rw-rw-r-- mathuser ugrad&amp;quot; for gftw.c - it is a file and the owner of the file (mathuser) and anyone in the ugrad group can both read and modify the file and everyone else on the system can read it.&lt;br /&gt;
* &amp;quot;-rw-r----- mathuser ugrad&amp;quot; for fftw3.c - it is a file and the owner of the file (mathuser) can read and modify the file, members of the ugrad group can read it and everyone else cannot access it in any way.&lt;br /&gt;
* &amp;quot;drwx------ mathuser ugrad&amp;quot; for newproject - it is a directory and only its owner (mathuser) can list its contents, create new files/dirs in it and enter it.&lt;br /&gt;
&lt;br /&gt;
=== chmod = change access rights ===&lt;br /&gt;
You have seen above how to view access rights and what they mean; you can modify them with chmod.  The first parameter specifies the rights and is followed by file/directory names.  Examples follow:&lt;br /&gt;
 [mathuser@math mathuser] ls -ld newproject/*.c newproject&lt;br /&gt;
 -rw-r-----   1 mathuser ugrad    2563 May 16 11:28 newproject/fftw3.c&lt;br /&gt;
 -rw-rw-r--   1 mathuser ugrad  111921 Feb  3 09:40 newproject/gftw.c&lt;br /&gt;
 drwx------   5 mathuser ugrad    4096 May 16 11:28 newproject&lt;br /&gt;
 [mathuser@math mathuser] chmod g+rwx newproject&lt;br /&gt;
 [mathuser@math mathuser] ls -ld newproject&lt;br /&gt;
 drwxrwx---   5 mathuser ugrad    4096 May 16 11:28 newproject&lt;br /&gt;
 [mathuser@math mathuser] chmod g-w newproject&lt;br /&gt;
 [mathuser@math mathuser] ls -ld newproject&lt;br /&gt;
 drwxr-x---   5 mathuser ugrad    4096 May 16 11:28 newproject&lt;br /&gt;
 [mathuser@math mathuser] chmod o-r newproject/*.c&lt;br /&gt;
 [mathuser@math mathuser] ls -l newproject/*.c&lt;br /&gt;
 -rw-r-----   1 mathuser ugrad    2563 May 16 11:28 newproject/fftw3.c&lt;br /&gt;
 -rw-rw----   1 mathuser ugrad  111921 Feb  3 09:40 newproject/gftw.c&lt;br /&gt;
 [mathuser@math mathuser] chmod a+r newproject/*.c&lt;br /&gt;
 [mathuser@math mathuser] ls -l newproject/*.c&lt;br /&gt;
 -rw-r--r--   1 mathuser ugrad    2563 May 16 11:28 newproject/fftw3.c&lt;br /&gt;
 -rw-rw-r--   1 mathuser ugrad  111921 Feb  3 09:40 newproject/gftw.c&lt;br /&gt;
 [mathuser@math mathuser] chmod -R og-rwx newproject&lt;br /&gt;
 [mathuser@math mathuser] ls -ld newproject/*.c newproject&lt;br /&gt;
 -rw-------   1 mathuser ugrad    2563 May 16 11:28 newproject/fftw3.c&lt;br /&gt;
 -rw-------   1 mathuser ugrad  111921 Feb  3 09:40 newproject/gftw.c&lt;br /&gt;
 drwx------   5 mathuser ugrad    4096 May 16 11:28 newproject&lt;br /&gt;
The logic is simple - you can add or remove rights for the user (u), group (g), others (o) or all (a).  E.g. g+rwx adds read, write and execute for the group, a+r adds read for all, and g-w removes write for the group.&lt;br /&gt;
&lt;br /&gt;
You can use the &amp;quot;-R&amp;quot; option to apply the same rights to a directory and all of its contents.  An especially useful flag is X: e.g. &amp;quot;chmod -R a+X dir&amp;quot; will make all the directories (and any already executable files) executable, but it will not add the execute right to files that do not have it already.&lt;br /&gt;
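The effect of the capital X can be sketched as follows (directory and file names are made up):&lt;br /&gt;

```shell
mkdir -p demo/sub
touch demo/sub/data.txt
chmod -R go-rwx demo   # remove all group/other rights first
chmod -R a+X demo      # X: the directories become searchable again,
                       # but the plain file gains no execute bit
ls -ld demo/sub demo/sub/data.txt
```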
&lt;br /&gt;
=== ln = links ===&lt;br /&gt;
Besides files and directories you can also use links, which allow you to access the same file or directory from different locations in the file system.  There are 2 types of links - symbolic links and hard links.  Hard links work only for files and allow an identical file to appear in multiple places in the same filesystem.  In other words, you can refer to the same file content with different file names and in different locations.  The ln command creates links; its first parameter is the name of the file to which you want to create a link and its second parameter is the name of the link you are about to create:&lt;br /&gt;
 [mathuser@math mathuser] ls -l newproject/fftw3.c newproject2006/fftw4.c&lt;br /&gt;
 -rw-------   1 mathuser ugrad    2563 May 16 11:28 newproject/fftw3.c&lt;br /&gt;
 ls: newproject2006/fftw4.c: No such file or directory&lt;br /&gt;
 [mathuser@math mathuser] ln newproject/fftw3.c newproject2006/fftw4.c&lt;br /&gt;
 [mathuser@math mathuser] ls -l newproject/fftw3.c newproject2006/fftw4.c&lt;br /&gt;
 -rw-------   2 mathuser ugrad    2563 May 16 11:28 newproject/fftw3.c&lt;br /&gt;
 -rw-------   2 mathuser ugrad    2563 May 16 11:28 newproject2006/fftw4.c&lt;br /&gt;
 [mathuser@math mathuser] rm newproject/fftw3.c&lt;br /&gt;
 [mathuser@math mathuser] ls -l newproject/fftw3.c newproject2006/fftw4.c&lt;br /&gt;
 ls: newproject/fftw3.c: No such file or directory&lt;br /&gt;
 -rw-------   1 mathuser ugrad    2563 May 16 11:28 newproject2006/fftw4.c&lt;br /&gt;
If, before removing fftw3.c in this example, you were to edit fftw4.c you would see that fftw3.c changed as well and remained exactly the same as fftw4.c.  Note also that the number between the file rights and the owner of the file lists the number of hard links for that particular file - initially 1, but as soon as we created another hard link it became 2.  A file that is hard linked anywhere on the file system is only really deleted when all of its hard links are gone.&lt;br /&gt;
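The link count, together with the inode number printed by &amp;quot;ls -i&amp;quot;, makes this behaviour easy to observe (a sketch with made-up names):&lt;br /&gt;

```shell
mkdir -p hl
echo data > hl/a   # create a small file
ln hl/a hl/b       # second hard link to the same content
ls -li hl          # both names show the same inode and link count 2
rm hl/a            # removing one name...
cat hl/b           # ...leaves the content reachable via the other
```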
&lt;br /&gt;
Symbolic links are pointers to any other file or '''directory''' on the system rather than just different names for the same file.  Whenever you try to read/access a symbolic link the operating system will try to resolve it by starting at the directory where the symbolic link is located and show you the file or directory that the link points to.  It is very important to understand how symbolic links are resolved - again, from the directory they are located in.  E.g. take a look at the following example (that also shows that symbolic links can point to a file/dir that does not exist):&lt;br /&gt;
 [mathuser@math mathuser] ln -s ~/newproject/fftw3.c newproject2006/fftw4.c&lt;br /&gt;
 [mathuser@math mathuser] ls -l newproject/fftw3.c newproject2006/fftw4.c&lt;br /&gt;
 -rw-------   1 mathuser ugrad    2563 May 16 11:28 newproject/fftw3.c&lt;br /&gt;
 lrwxrwxrwx   1 mathuser ugrad    2563 May 16 11:28 newproject2006/fftw4.c -&amp;gt; /home/mathuser/newproject/fftw3.c&lt;br /&gt;
 [mathuser@math mathuser] more newproject2006/fftw4.c&lt;br /&gt;
 .... contents ....&lt;br /&gt;
The above example creates a symbolic link with an absolute reference pointing to ~/newproject/fftw3.c, and we verify that it works.  Note that the listing of the link shows that it is a link (&amp;quot;lrwxrwxrwx&amp;quot;), the name of the link and what it points to.  Now an example of why you should be careful:&lt;br /&gt;
 [mathuser@math mathuser] ln -s newproject/fftw3.c newproject2006/fftw4.c&lt;br /&gt;
 [mathuser@math mathuser] ls -l newproject/fftw3.c newproject2006/fftw4.c&lt;br /&gt;
 -rw-------   1 mathuser ugrad    2563 May 16 11:28 newproject/fftw3.c&lt;br /&gt;
 lrwxrwxrwx   1 mathuser ugrad    2563 May 16 11:28 newproject2006/fftw4.c -&amp;gt; newproject/fftw3.c&lt;br /&gt;
 [mathuser@math mathuser] more newproject2006/fftw4.c&lt;br /&gt;
 more: newproject2006/fftw4.c: No such file or directory&lt;br /&gt;
The system cannot find newproject2006/fftw4.c because the symbolic link is located in the directory newproject2006 and, since the target is not an absolute reference, the system tries to read newproject2006/newproject/fftw3.c, which does not exist.  An example of how to create a working relative symbolic link follows:&lt;br /&gt;
 [mathuser@math mathuser] ln -s ../newproject/fftw3.c newproject2006/fftw4.c&lt;br /&gt;
 [mathuser@math mathuser] ls -l newproject/fftw3.c newproject2006/fftw4.c&lt;br /&gt;
 -rw-------   1 mathuser ugrad    2563 May 16 11:28 newproject/fftw3.c&lt;br /&gt;
 lrwxrwxrwx   1 mathuser ugrad    2563 May 16 11:28 newproject2006/fftw4.c -&amp;gt; ../newproject/fftw3.c&lt;br /&gt;
 [mathuser@math mathuser] more newproject2006/fftw4.c&lt;br /&gt;
 .... contents ....&lt;br /&gt;
This time it works because the system accesses newproject2006/../newproject/fftw3.c, which does exist.  Note that all kinds of problems can occur when you use symlinks and start moving them or the files they point to, so careful use of either relative or absolute links is required.&lt;br /&gt;
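When in doubt about where a link really points, readlink can show both the stored target and the fully resolved path (a sketch reusing the directory names from above):&lt;br /&gt;

```shell
mkdir -p newproject newproject2006
touch newproject/fftw3.c
ln -s ../newproject/fftw3.c newproject2006/fftw4.c
readlink newproject2006/fftw4.c      # prints the target exactly as stored
readlink -f newproject2006/fftw4.c   # prints the absolute, fully resolved path
```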
&lt;br /&gt;
Note again that for directories you can only create symbolic links, not hard links.&lt;br /&gt;
&lt;br /&gt;
== View files ==&lt;br /&gt;
=== cat = display full content ===&lt;br /&gt;
cat will display the full contents of a file, as in &amp;quot;cat newproject/fftw3.c&amp;quot;, without stopping after each screenful or any other user interaction. &amp;lt;br&amp;gt;&lt;br /&gt;
cat can also concatenate multiple files, as in &amp;quot;cat data1 data2 data3 &amp;gt; alldata&amp;quot;, which writes the three files sequentially into a file called alldata.&amp;lt;br&amp;gt;&lt;br /&gt;
You can also use the &amp;gt;&amp;gt; redirection operator to append one file to another, as in &amp;quot;cat potcar.H &amp;gt;&amp;gt; POTCAR&amp;quot;, which writes potcar.H at the end of POTCAR.&lt;br /&gt;
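A self-contained sketch of both uses (the data files are made up):&lt;br /&gt;

```shell
printf 'run1\n' > data1
printf 'run2\n' > data2
printf 'run3\n' > data3
cat data1 data2 data3 > alldata   # concatenate the three files in order
cat data3 >> alldata              # append one more copy at the end
wc -l alldata                     # alldata now has 4 lines
```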
&lt;br /&gt;
=== more/less = display files slowly ===&lt;br /&gt;
The more and less commands display file contents one screenful at a time.  You can display the next page by pressing space, go back to the previous page with &amp;quot;b&amp;quot;, search within the document after pressing &amp;quot;/&amp;quot; and so on.  more and less are similar in intent but differ in capabilities; the man page of each can tell you more about the key combinations it understands.&lt;br /&gt;
&lt;br /&gt;
== Editors ==&lt;br /&gt;
There are a couple of different editors you can consider using.&lt;br /&gt;
=== nano = simple, limited ===&lt;br /&gt;
nano is a very simple editor, but one that cannot do much.  Start it by giving it the name of the file you want to edit/create:&lt;br /&gt;
 [mathuser@math mathuser] nano newproject/fftw3.c&lt;br /&gt;
It understands a few commands, all listed at the bottom of the screen.  E.g. &amp;quot;^X&amp;quot; means you should press Ctrl-X (to exit nano).&lt;br /&gt;
=== emacs = powerful, complicated ===&lt;br /&gt;
emacs is a very powerful, complicated and, if I may be forgiven, bloated editor that you can use for anything from editing files to reading your e-mail to talking to a psychotherapist.  In particular, it can help you with your work because by default it applies syntax highlighting according to the document type.  E.g. it will recognize a C document and use different colors for statements, comments and variables.  It will even try to help you match up braces ({ and }).  Start it by typing something like:&lt;br /&gt;
 [mathuser@math mathuser] emacs newproject/fftw3.c&lt;br /&gt;
Here are some of the basic commands you can use in emacs.  You can get more detailed help in emacs itself, you can use &amp;quot;info emacs&amp;quot;, or you can find information on the net.&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|+ Basic emacs commands&lt;br /&gt;
! Key combination !! action&lt;br /&gt;
|-&lt;br /&gt;
| Ctrl-x Ctrl-c || quit emacs (if not saved you will be asked if you want to save the file you are working on)&lt;br /&gt;
|-&lt;br /&gt;
| Ctrl-x Ctrl-s || save the document you are working on&lt;br /&gt;
|-&lt;br /&gt;
| Ctrl-g || interrupt, e.g. interrupt whatever other command you might've been trying to type&lt;br /&gt;
|-&lt;br /&gt;
| Ctrl-x Ctrl-f || open a file (a new one)&lt;br /&gt;
|-&lt;br /&gt;
| Ctrl-x Ctrl-v || open a file in place of the current one&lt;br /&gt;
|-&lt;br /&gt;
| Ctrl-x Ctrl-b || list all open files/buffers&lt;br /&gt;
|-&lt;br /&gt;
| Ctrl-x b || switch to another buffer&lt;br /&gt;
|-&lt;br /&gt;
| Ctrl-x 2 || split the screen into 2&lt;br /&gt;
|-&lt;br /&gt;
| Ctrl-x 1 || unsplit the screen and only leave the current window&lt;br /&gt;
|-&lt;br /&gt;
| Ctrl-x o || switch to the other window&lt;br /&gt;
|-&lt;br /&gt;
| Ctrl-x u || undo (can be used multiple times)&lt;br /&gt;
|-&lt;br /&gt;
| Ctrl-x i || insert a file at the current location of the cursor&lt;br /&gt;
|-&lt;br /&gt;
| Ctrl-s || search forwards&lt;br /&gt;
|-&lt;br /&gt;
| Ctrl-r || search backwards&lt;br /&gt;
|-&lt;br /&gt;
| Ctrl-k || delete to the end of the current line&lt;br /&gt;
|-&lt;br /&gt;
| Ctrl-y || paste recently deleted text&lt;br /&gt;
|-&lt;br /&gt;
| Esc x doctor || talk to the psychotherapist (you'll need it after working on your thesis for too long...)&lt;br /&gt;
|}&lt;br /&gt;
These are just scratching the surface...&lt;br /&gt;
&lt;br /&gt;
=== vim/vi = powerful, unintuitive, fast ===&lt;br /&gt;
vim is a clone of vi, a traditional Unix editor.  It is fast, efficient and capable - e.g. it also has syntax highlighting.  It is more cryptic and less intuitive to get going with initially, but rewarding if you stick with it.&lt;br /&gt;
&lt;br /&gt;
Unlike in most other editors, in vi you cannot immediately start editing - that's because vi can be in one of several modes.  You start in &amp;quot;command mode&amp;quot;, where you can enter various commands and move the cursor through the file.  E.g. after moving the cursor to where you want to start typing, pressing i moves you to &amp;quot;insert mode&amp;quot;, and only then can you start editing.  To exit an editing mode press the Escape key.  Note that all the commands in the following table have to be issued in command mode:&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
|+ Basic vim/vi commands (almost all work only in command mode)&lt;br /&gt;
! Command (case matters) !! action&lt;br /&gt;
|-&lt;br /&gt;
| Esc || exit the current mode and return to command mode&lt;br /&gt;
|-&lt;br /&gt;
| i || insert mode - start inserting text at the location of the cursor&lt;br /&gt;
|-&lt;br /&gt;
| a || append/insert mode - like insert but after the location of the cursor&lt;br /&gt;
|-&lt;br /&gt;
| R || replace mode - overwrite existing text, do not insert&lt;br /&gt;
|-&lt;br /&gt;
| x || delete a character under the cursor&lt;br /&gt;
|-&lt;br /&gt;
| D || delete until the end of the line&lt;br /&gt;
|-&lt;br /&gt;
| / || start a forward search&lt;br /&gt;
|-&lt;br /&gt;
| ? || start a backward search&lt;br /&gt;
|-&lt;br /&gt;
| :w || save the file&lt;br /&gt;
|-&lt;br /&gt;
| :q || quit vi&lt;br /&gt;
|-&lt;br /&gt;
| :wq || save and quit&lt;br /&gt;
|-&lt;br /&gt;
| . || repeat the last change&lt;br /&gt;
|-&lt;br /&gt;
| b || move cursor a word back&lt;br /&gt;
|-&lt;br /&gt;
| w || move cursor a word forward&lt;br /&gt;
|-&lt;br /&gt;
| { || move cursor to the beginning of the previous paragraph&lt;br /&gt;
|-&lt;br /&gt;
| } || move cursor to the beginning of the next paragraph&lt;br /&gt;
|-&lt;br /&gt;
| G || move to the end of the file&lt;br /&gt;
|-&lt;br /&gt;
| gg || move to the beginning of the file&lt;br /&gt;
|-&lt;br /&gt;
| 23G || move to the 23rd line in the file&lt;br /&gt;
|-&lt;br /&gt;
| dd || delete the whole line&lt;br /&gt;
|-&lt;br /&gt;
| db || delete previous word&lt;br /&gt;
|-&lt;br /&gt;
| d} || delete next paragraph&lt;br /&gt;
|-&lt;br /&gt;
| yy || copy the present line into buffer&lt;br /&gt;
|-&lt;br /&gt;
| yw || copy the next word into buffer&lt;br /&gt;
|-&lt;br /&gt;
| yG || copy all the text from the cursor to the end of the file into buffer&lt;br /&gt;
|-&lt;br /&gt;
| p || paste text from buffer after current line&lt;br /&gt;
|-&lt;br /&gt;
| P || paste text from buffer before current line&lt;br /&gt;
|-&lt;br /&gt;
| o || insert a new line after the current one&lt;br /&gt;
|-&lt;br /&gt;
| O || insert a new line before the current one&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
Note how certain commands, like d or y, can be combined with movement commands.  E.g. dG would delete everything from the cursor location until the end of the document.&lt;br /&gt;
&lt;br /&gt;
=== gedit = familiar interface, very useful ===&lt;br /&gt;
gedit is a graphical editor that looks and behaves much like WordPad (the upgraded version of Notepad) on Windows.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== grep, find, locate = looking for files and strings ==&lt;br /&gt;
There will be a time when you won't remember where you implemented that particular function, or you will need to find out which system library contains the function that prevents your program from linking successfully, or you will simply wonder where that file is that you know you have somewhere among thousands of files.  So let's see how to search.&lt;br /&gt;
&lt;br /&gt;
=== grep = find strings in files ===&lt;br /&gt;
grep can be used to find strings in files.  It expects the string it is going to look for, followed by a list of files in which to look.  For example:&lt;br /&gt;
 [mathuser@math mathuser] grep integrate_partial *.c *.h&lt;br /&gt;
 integrate.h: void integrate_partial_one(&lt;br /&gt;
 integrate.h: int integrate_partial(&lt;br /&gt;
 integrate_helper.c: void integrate_partial_one(INTEGRAL *to_integrate, int attempt) {&lt;br /&gt;
 integrate.c: int integrate_partial(INTEGRAL *to_integrate) {&lt;br /&gt;
looks for integrate_partial in all files ending with .c or .h.  For each match it prints the name of the file followed by the line that contains the string.  Note that it matched both integrate_partial_one and integrate_partial.&lt;br /&gt;
&lt;br /&gt;
By default grep is case sensitive, but you can specify the -i option to make it ignore case.  You can also tell grep to look recursively through a directory.  More examples:&lt;br /&gt;
 [mathuser@math mathuser] grep -ri integrate_partial .&lt;br /&gt;
 integrate.h: void integrate_partial_one(&lt;br /&gt;
 integrate.h: int integrate_partial(&lt;br /&gt;
 integrate.h: #define INTEGRATE_PARTIAL_ATTEMPTS 100&lt;br /&gt;
 integrate_helper.c: void integrate_partial_one(INTEGRAL *to_integrate, int attempt) {&lt;br /&gt;
 integrate_helper.c:                      if (attempt &amp;lt; INTEGRATE_PARTIAL_ATTEMPTS) {&lt;br /&gt;
 integrate.c: int integrate_partial(INTEGRAL *to_integrate) {&lt;br /&gt;
 Binary file compile_intel/integrate_helper.o matches&lt;br /&gt;
 Binary file compile_intel/integrate.o matches&lt;br /&gt;
 Binary file compile_gcc4/integrate_helper.o matches&lt;br /&gt;
 Binary file compile_gcc4/integrate.o matches&lt;br /&gt;
here we ignored case (compare with the previous example) and also instructed grep to search recursively through &amp;quot;.&amp;quot;, i.e. the current directory.  Note that when binary files match, grep only tells you so - being binary, their content cannot be shown - but even then it is still useful.  For example, if you are unsure which library you need to link in to satisfy a dependency on the pthread_join function you could do:&lt;br /&gt;
 [mathuser@math mathuser] grep pthread_join /lib64/lib* /lib/lib*&lt;br /&gt;
 Binary file /lib64/libpthread-0.10.so matches&lt;br /&gt;
 Binary file /lib64/libpthread.so.0 matches&lt;br /&gt;
 Binary file /lib/libpthread-0.10.so matches&lt;br /&gt;
 Binary file /lib/libpthread.so.0 matches&lt;br /&gt;
Note that I've searched through /lib64 and /lib - that is because hydra is a 64bit machine and has both 64 bit and 32 bit libraries.&lt;br /&gt;
&lt;br /&gt;
You can also invert the match - i.e. select non-matching lines - by using the &amp;quot;-v&amp;quot; option.  You can find an example of that below in the &amp;quot;pipe&amp;quot; section.&lt;br /&gt;
&lt;br /&gt;
=== find = find files ===&lt;br /&gt;
The find command can be used to search for files and directories.  It is an extremely complex command, so we will just mention a few simple uses and give one more complex example to show what it is capable of.  Please check the man page for more options.  Examples:&lt;br /&gt;
 [mathuser@math mathuser] find ~ -name fftw3.c&lt;br /&gt;
looks for a file called exactly fftw3.c in mathuser's home directory.&lt;br /&gt;
 [mathuser@math mathuser] find newproject -name \*.c&lt;br /&gt;
looks for all files ending with .c in the newproject subdirectory.  Here we used a star (*) because we wanted all files ending with .c, but note that we had to &amp;quot;escape&amp;quot; it, i.e. prefix it with &amp;quot;\&amp;quot;.  This is necessary because the shell normally expands such patterns before handing them to the find command.  Therefore, if there were .c files in the current working directory, say fftw2.c and fftw3.c, and we used plain &amp;quot;*.c&amp;quot; instead of &amp;quot;\*.c&amp;quot;, the shell would end up executing &amp;quot;find newproject -name fftw2.c fftw3.c&amp;quot;.  Here that would simply fail, but in other situations it might seem to work while not giving the desired result.&lt;br /&gt;
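Quoting the pattern works just as well as backslash-escaping it; both forms below reach find unexpanded (file names are made up):&lt;br /&gt;

```shell
mkdir -p newproject
touch newproject/fftw2.c newproject/fftw3.c
find newproject -name '*.c'   # single quotes stop shell expansion
find newproject -name \*.c    # so does the backslash
```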
&lt;br /&gt;
If you want to ignore case you can use the -iname option:&lt;br /&gt;
 [mathuser@math mathuser] find newproject -iname \*.c&lt;br /&gt;
And here is a more complicated example that looks for .cxx files containing integrate_string and removes each one it finds:&lt;br /&gt;
 [mathuser@math mathuser] find newproject -name \*.cxx -exec grep -q integrate_string &amp;quot;{}&amp;quot; \; -exec rm -f &amp;quot;{}&amp;quot; \;&lt;br /&gt;
&lt;br /&gt;
=== locate = find files on the system ===&lt;br /&gt;
The system maintains a database of file locations that you can search through quickly, e.g.:&lt;br /&gt;
 [mathuser@math mathuser] locate libnss_dns-2.3.2.so&lt;br /&gt;
 /opt/teamhpc/node-ssi/rootfs/lib64/libnss_dns-2.3.2.so&lt;br /&gt;
 /opt/pathscale-1.4/x86_64-pathscale-linux/lib/libnss_dns-2.3.2.so&lt;br /&gt;
 /opt/pathscale-1.4/x86_64-pathscale-linux/lib64/libnss_dns-2.3.2.so&lt;br /&gt;
 /opt/pathscale-2.0/x86_64-pathscale-linux/lib/libnss_dns-2.3.2.so&lt;br /&gt;
 /opt/pathscale-2.0/x86_64-pathscale-linux/lib64/libnss_dns-2.3.2.so&lt;br /&gt;
 /lib64/libnss_dns-2.3.2.so&lt;br /&gt;
 /lib/libnss_dns-2.3.2.so&lt;br /&gt;
but, unfortunately, it does not index home directories (/home or /storage), so it can only be used to find files on the local system.&lt;br /&gt;
&lt;br /&gt;
== pipe = combining commands ==&lt;br /&gt;
One of the great strengths of the Linux command line is that you can combine commands, processing the output of one command with another by using the pipe &amp;quot;|&amp;quot;.  For example:&lt;br /&gt;
 [mathuser@math mathuser] locate libnss_dns-2.3.2.so | grep -v pathscale&lt;br /&gt;
 /opt/teamhpc/node-ssi/rootfs/lib64/libnss_dns-2.3.2.so&lt;br /&gt;
 /lib64/libnss_dns-2.3.2.so&lt;br /&gt;
 /lib/libnss_dns-2.3.2.so&lt;br /&gt;
here we pipe the output of locate's search for libnss_dns-2.3.2.so into grep, which does an inverted match for pathscale (i.e. excludes any line matching the string pathscale) and prints the rest.&lt;br /&gt;
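Pipelines are not limited to two commands; a classic chain counts and ranks repeated lines (a sketch with made-up data):&lt;br /&gt;

```shell
printf 'alpha\nbeta\nalpha\n' > words
sort words | uniq -c | sort -rn   # count duplicates, most frequent first
grep -c alpha words               # grep -c alone can count matching lines
```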
&lt;br /&gt;
== redirecting &amp;gt;&amp;gt; data ==&lt;br /&gt;
You can redirect output with the &amp;gt; and &amp;gt;&amp;gt; arrows.  The single arrow writes the output to a file, creating it if needed (and, unless your shell's noclobber option is set, overwriting it if it already exists).  Type &amp;quot;ls -l &amp;gt; file_list&amp;quot; to create a file containing the listing of your current directory.  The double arrow appends the output to the end of a file.  Type &amp;quot;date &amp;gt;&amp;gt; file1&amp;quot; to have the current date and time written to the end of file1.&lt;br /&gt;
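A minimal sketch of the two operators (file names are arbitrary):&lt;br /&gt;

```shell
date > file1        # create (or overwrite) file1 with one line
date >> file1       # append a second line
cat file1 | wc -l   # file1 now holds 2 lines
```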
&lt;br /&gt;
== Finding documentation and help on the system itself ==&lt;br /&gt;
Most of the system commands and much of the software that is installed on the system comes with documentation. That includes man pages, info documents and all kinds of other documents (pdf, txt, postscript and others).  Here you will find hints on how to find it all.&lt;br /&gt;
&lt;br /&gt;
=== Ask for help from the command itself ===&lt;br /&gt;
Many commands will print their basic options and usage if you ask with the -h or --help option.  E.g. try the following examples:&lt;br /&gt;
 [mathuser@math mathuser] cp --help&lt;br /&gt;
 [mathuser@math mathuser] man -h&lt;br /&gt;
&lt;br /&gt;
=== man pages ===&lt;br /&gt;
Man (manual) pages are included with much of the software installed on the system, and almost all system commands have a man page describing how to use them.  E.g. to view the man page for the cp command you would type:&lt;br /&gt;
 [mathuser@math mathuser] man cp&lt;br /&gt;
and on that page you will find detailed information on the cp command, including all the arguments it can be used with.  Note that, just like with more/less, you can search through the man page you are viewing as well as scroll back and forth.&lt;br /&gt;
&lt;br /&gt;
Man pages also contain documentation for various system calls and some operating system features.&lt;br /&gt;
&lt;br /&gt;
You can also search through the descriptions of man pages (&amp;quot;man -k ...&amp;quot;), which might help if you are looking for a specific command and cannot remember it exactly.  For example, if you were looking for DNS related man pages:&lt;br /&gt;
 [mathuser@math mathuser] man -k DNS&lt;br /&gt;
 dig                  (1)  - DNS lookup utility&lt;br /&gt;
 dnsdomainname [hostname] (1)  - show the system's DNS domain name&lt;br /&gt;
 dnssec-keygen        (8)  - DNSSEC key generation tool&lt;br /&gt;
 dnssec-makekeyset    (8)  - DNSSEC zone signing tool&lt;br /&gt;
 dnssec-signkey       (8)  - DNSSEC key set signing tool&lt;br /&gt;
 dnssec-signzone      (8)  - DNSSEC zone signing tool&lt;br /&gt;
 host                 (1)  - DNS lookup utility&lt;br /&gt;
 nsupdate             (8)  - Dynamic DNS update utility&lt;br /&gt;
Note also the number in brackets - it indicates the man page section in which the relevant man page is located.  For example, to get the nsupdate man page from section 8 you could request it with&lt;br /&gt;
 [mathuser@math mathuser] man 8 nsupdate&lt;br /&gt;
&lt;br /&gt;
=== Info manuals ===&lt;br /&gt;
Some programs also have info manuals.  They are generally much more comprehensive than man pages, but fewer programs have them.  For example, gcc and emacs both have extensive info manuals:&lt;br /&gt;
 [mathuser@math mathuser] info gcc&lt;br /&gt;
Press &amp;quot;?&amp;quot; while viewing an info manual to get information on the commands you can use with the info program.&lt;br /&gt;
&lt;br /&gt;
=== /usr/share/doc and other docs ===&lt;br /&gt;
Many programs install additional documentation in various formats.  Most of it is located in /usr/share/doc, in a subdirectory named after the program's name and version.  For example, you can find additional octave documentation under &amp;quot;/usr/share/doc/octave-2.1.57/&amp;quot; (or a similar location - the path will change if we upgrade octave to a newer version).  Programs installed under /opt may have documentation there too.  For example, pathscale has some documentation under &amp;quot;/opt/pathscale/share/doc/pathscale-compilers-2.4&amp;quot; - again, this will change when the pathscale compiler is upgraded.&lt;br /&gt;
&lt;br /&gt;
== Other Tutorials ==&lt;br /&gt;
These are just some of the numerous tutorials you can find on the net.  They are all better and more detailed than the above introduction, but they also cover a lot of ground that you may not need or care about at this point.&lt;br /&gt;
* [http://www.tldp.org/LDP/gs/node5.html Linux Tutorial from the Linux Documentation Project]&lt;br /&gt;
* [http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/step-guide/ Red Hat Step by step guide]&lt;br /&gt;
* [http://www.gnu.org/software/bash/manual/bashref.html BASH Commands] (useful for writing scripts)&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Help:Contents&amp;diff=1759</id>
		<title>Help:Contents</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Help:Contents&amp;diff=1759"/>
		<updated>2007-09-27T14:16:57Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In order to contact Math/PACM computing support please e-mail [mailto:compudoc@princeton.edu compudoc@princeton.edu] or call 8-0476.&lt;br /&gt;
&lt;br /&gt;
Josko Plazonic&lt;br /&gt;
&lt;br /&gt;
222 Fine Hall&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
John Vincent&lt;br /&gt;
&lt;br /&gt;
209 Fine Hall&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information&amp;diff=1758</id>
		<title>Documentation and Information</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information&amp;diff=1758"/>
		<updated>2007-09-27T14:10:14Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you will find documentation and information relevant to computing systems at Fine Hall.  If you are looking for specific instructions on how to perform certain tasks then you are more likely to find what you are looking for on [[HowTos]] and [[Frequently_Asked_Questions|Frequently Asked Questions]] pages.&lt;br /&gt;
&lt;br /&gt;
== Introductions ==&lt;br /&gt;
* [[Documentation_and_Information:Getting started with Linux|Getting started with Linux]]&lt;br /&gt;
&lt;br /&gt;
== Computational Resources ==&lt;br /&gt;
* [[Documentation_and_Information:Computational clusters in Fine Hall|Computational clusters in Fine Hall]]&lt;br /&gt;
* [[Documentation_and_Information:Computationally related software|Computationally related software]]&lt;br /&gt;
* [[Documentation_and_Information:Compilers and SGE|How to use compilers and the SGE scheduling software]]&lt;br /&gt;
&lt;br /&gt;
== Printers ==&lt;br /&gt;
* [[Documentation_and_Information:Public printers|Publicly accessible printers]]&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information&amp;diff=1757</id>
		<title>Documentation and Information</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information&amp;diff=1757"/>
		<updated>2007-09-27T14:08:35Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: /* Computational Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you will find documentation and information relevant to computing systems at Fine Hall.  If you are looking for specific instructions on how to perform certain tasks then you are more likely to find what you are looking for on [[HowTos]] and [[Frequently_Asked_Questions|Frequently Asked Questions]] pages.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Computational Resources ==&lt;br /&gt;
* [[Documentation_and_Information:Computational clusters in Fine Hall|Computational clusters in Fine Hall]]&lt;br /&gt;
* [[Documentation_and_Information:Computationally related software|Computationally related software]]&lt;br /&gt;
* [[Documentation_and_Information:Compilers and SGE|How to use compilers and the SGE scheduling software]]&lt;br /&gt;
&lt;br /&gt;
== Printers ==&lt;br /&gt;
* [[Documentation_and_Information:Public printers|Publicly accessible printers]]&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Computational_clusters_in_Fine_Hall&amp;diff=1756</id>
		<title>Documentation and Information:Computational clusters in Fine Hall</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Computational_clusters_in_Fine_Hall&amp;diff=1756"/>
		<updated>2007-09-26T20:45:49Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: /* Comp computational cluster */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Fine Hall machine room currently hosts 3 different computational clusters:&lt;br /&gt;
&lt;br /&gt;
== Comp computational cluster ==&lt;br /&gt;
=== Description ===&lt;br /&gt;
The Comp cluster is an older cluster consisting of 16 single-CPU AMD Athlon machines with speeds around 1.6GHz and memory per node ranging from 512MB to 1GB.  Nodes are connected together with 100Mb ethernet networking and have 20GB-40GB hard drives.  &lt;br /&gt;
&lt;br /&gt;
=== Configuration ===&lt;br /&gt;
The cluster is integrated into the Fine Hall Math/PACM network and all the cluster machines mount Math/PACM home directories and run the same operating system version as the rest of the Fine Hall Linux machines - PU_IAS/Elders 5 Linux (a clone of RHEL5).  The software set also closely matches that of the Fine Hall Linux workstations, though some graphical/desktop applications with no computational use have not been installed.&lt;br /&gt;
&lt;br /&gt;
For temporary storage, besides /tmp, one can also use /scratch - with no quotas.  It must be emphasized that neither /scratch nor /tmp can be used for permanent data storage and no crucial data should be stored there; use them, for example, for intermediate computational results.  /tmp and /scratch are '''NOT''' backed up and can be erased at any time, especially if a reinstall of one or more machines is required or if one of these directories is full and other users need space. /tmp is also regularly cleaned up by a system job and any file in /tmp that hasn't been accessed in the last 10 days will be deleted.&lt;br /&gt;
&lt;br /&gt;
The scheduling software used on the cluster is the Sun Grid Engine.&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
This cluster is fully accessible to all members of Math/PACM.  &lt;br /&gt;
&lt;br /&gt;
=== How to connect ===&lt;br /&gt;
The Comp cluster's head node is named comp (comp01).  You can connect to it with ssh, but only from '''math.princeton.edu''' and '''pacm.princeton.edu'''.  E.g. '''&amp;lt;tt&amp;gt;ssh comp&amp;lt;/tt&amp;gt;'''.  &lt;br /&gt;
&lt;br /&gt;
== How to Use ==&lt;br /&gt;
No computations/jobs should be run on the cluster without the use of the scheduling software, SGE.  Any jobs not using SGE may be removed at any time.&lt;br /&gt;
&lt;br /&gt;
== Macomp computing cluster ==&lt;br /&gt;
=== Description ===&lt;br /&gt;
The MaComp computational cluster consists of 26 dual Opteron 248 nodes (2.2GHz operating frequency).  The master node is equipped with 8GB of memory and the compute nodes with 2GB each.  Nodes are connected with gigabit ethernet networking and have 120GB IDE hard drives.&lt;br /&gt;
=== Configuration ===&lt;br /&gt;
The cluster is integrated into Fine Hall Math/PACM network and all the cluster machines mount Math/PACM home directories.  The operating system used on these machines is a clone of RHEL 3.&lt;br /&gt;
&lt;br /&gt;
For temporary storage, besides /tmp, one can also use /scratch - with no quotas. It must be emphasized that neither /scratch nor /tmp can be used for permanent data storage and no crucial data should be stored there; use them, for example, for intermediate computational results. /tmp and /scratch are NOT backed up and can be erased at any time, especially if a reinstall of one or more machines is required or if one of these directories is full and other users need space. /tmp is also regularly cleaned up by a system job and any file in /tmp that hasn't been accessed in the last 10 days will be deleted.&lt;br /&gt;
&lt;br /&gt;
The scheduling software used is Sun's Grid Engine version 6.0 and all jobs '''have to''' be submitted with SGE.  Once logged in, please check &amp;lt;tt&amp;gt;/usr/finehall/computing/sge/samples/readme.txt&amp;lt;/tt&amp;gt; for basic instructions on how to submit jobs to SGE, and in particular how to submit MPIch jobs. You can find sample submission scripts in &amp;lt;tt&amp;gt;/usr/finehall/computing/sge/samples&amp;lt;/tt&amp;gt;.&lt;br /&gt;
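As an illustration only (the job name and program here are made up - consult the readme.txt above for the actual macomp conventions), a minimal SGE batch script might look like:&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #$ -cwd            # run the job from the directory it was submitted from&lt;br /&gt;
 #$ -N myjob        # job name (illustrative)&lt;br /&gt;
 ./my_program&lt;br /&gt;
which you would submit with &amp;quot;qsub myjob.sh&amp;quot; and monitor with &amp;quot;qstat&amp;quot;.&lt;br /&gt;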
=== Access ===&lt;br /&gt;
At this time access is restricted to grant applicants/contributors.&lt;br /&gt;
&lt;br /&gt;
=== How to connect ===&lt;br /&gt;
In order to connect to the MaComp cluster you first have to log in to &amp;lt;tt&amp;gt;math.princeton.edu&amp;lt;/tt&amp;gt; and from there you can:&lt;br /&gt;
 ssh macomp&lt;br /&gt;
Login should proceed without the need to enter any passwords.  If you are denied access or asked for a password then your account has not yet been allowed access to the cluster.&lt;br /&gt;
== Wiffin computing cluster ==&lt;br /&gt;
=== Description ===&lt;br /&gt;
The Wiffin computational cluster consists of 20 dual Xeon 2.2GHz nodes.  Half of the nodes have 2GB of memory and the other half 4GB.  Nodes are connected with gigabit ethernet networking.&lt;br /&gt;
&lt;br /&gt;
=== Configuration ===&lt;br /&gt;
The cluster is running a version of RedHat Linux.&lt;br /&gt;
&lt;br /&gt;
The scheduling software used is Sun's Grid Engine version 5.3 and all jobs '''have to''' be submitted with SGE.&lt;br /&gt;
&lt;br /&gt;
=== Access ===&lt;br /&gt;
Access to this cluster is restricted to members of Prof. Emily Carter's research group; it is not otherwise part of the Fine Hall network of Math/PACM Linux machines.&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos&amp;diff=1755</id>
		<title>HowTos</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos&amp;diff=1755"/>
		<updated>2007-06-15T20:38:10Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: remove instructions about our ssl certificate&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you will find instructions on how to do some of the more common computing tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Certificates ==&lt;br /&gt;
We used to rely on our own unsigned SSL certificates for Math web servers and e-mail, but we have recently replaced them with [http://certs.ipsca.com/ ipsCA]'s signed certificates.  ipsCA generously provides high quality free SSL certificates to educational institutions.  &lt;br /&gt;
&lt;br /&gt;
All recent browsers and e-mail clients include the appropriate root certificate needed to verify the identity of our servers, so no additional importing of certificates should be required. If you encounter any problems with our SSL certificates (for example, if your browser or e-mail client cannot recognize or verify them), please let us know.&lt;br /&gt;
&lt;br /&gt;
== Connect to Math/PACM systems remotely ==&lt;br /&gt;
There are a number of different ways to access Math/PACM systems and services - login servers, computational machines, E-mail, files on file server and others.  Here are some of these ways:&lt;br /&gt;
* [[HowTos:Access your files on Math/PACM file server via cifs/samba|Access your files on Math/PACM file server via cifs/samba on Windows, Mac OS X or Linux]] - directly access your files on the file server, on campus or after connecting via VPN&lt;br /&gt;
* [[HowTos:Connect to login servers via ssh|Connect to login servers via ssh from Windows, Mac OS X or Linux]] (also copy files back and forth by using ssh/scp)&lt;br /&gt;
* [[HowTos:Remote Linux Desktop access|Remote Linux Desktop access]]&lt;br /&gt;
For E-mail reading/access only please read below.&lt;br /&gt;
&lt;br /&gt;
== E-mail access and configuration ==&lt;br /&gt;
* [[HowTos:E-mail configuration for Thunderbird on Math Linux machines|Configure Thunderbird on Math Linux workstations]]&lt;br /&gt;
* [[HowTos:E-mail configuration for Thunderbird|Configure Thunderbird or Mozilla in general]]&lt;br /&gt;
* [[HowTos:Read E-mail with webmail|Read your e-mail in your web browser by using Horde/IMP webmail]]&lt;br /&gt;
&lt;br /&gt;
== File restore/undelete/backup/snapshots ==&lt;br /&gt;
* [[HowTos:Restore files from snapshots on Linux from home directory on Math file server|How to restore deleted files or previous versions on Linux from home directory on Math/PACM file server]] (for files deleted or changed within last 4 days)&lt;br /&gt;
* [[HowTos:Restore files from snapshots on Windows from home directory on Math file server|How to restore deleted files or previous versions on Windows from home directory on Math/PACM file server]] (for files deleted or changed within last 4 days)&lt;br /&gt;
* [[HowTos:Restore files from snapshots on Mac OS X from home directory on Math file server|How to restore deleted files or previous versions on Mac OS X from home directory on Math/PACM file server]] (for files deleted or changed within last 4 days)&lt;br /&gt;
* [[HowTos:Restore files from backups|How to obtain files from backups]] (for files deleted or changed more than 4 days ago and usually not more than 3-4 months ago)&lt;br /&gt;
&lt;br /&gt;
== Printing ==&lt;br /&gt;
* [[HowTos:Configure MacOSX for Dell W5300n|How to configure your Macintosh for printing with the Dell printers on 11th and 5th floor (W5300n)]]&lt;br /&gt;
* [[HowTos:Configure Windows Printing for Fine Hall|How to configure your Microsoft Windows computer for printing to public printers in Fine Hall]]&lt;br /&gt;
&lt;br /&gt;
== TeX ==&lt;br /&gt;
* [[HowTos:Install TeX on a Microsoft Windows computer|A quick HowTo about installing TeX on a Microsoft Windows computer]]&lt;br /&gt;
* [[HowTos:Add TeX to your webpage|How to add good looking TeX code to your webpages on Math webserver]]&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Add_TeX_to_your_webpage&amp;diff=1754</id>
		<title>HowTos:Add TeX to your webpage</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Add_TeX_to_your_webpage&amp;diff=1754"/>
		<updated>2007-03-08T15:29:05Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This HowTo provides basic instructions on how to add TeX based formulas to your webpages located on the math webserver by using jsMath.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
The main math webserver as well as the cgi webserver have the jsMath package installed on them and you can very easily use it to add TeX based formulas/text to your webpages hosted on math webservers.  &lt;br /&gt;
&lt;br /&gt;
jsMath is a javascript based software that can interpret TeX/LaTeX formulas embedded in your webpage and replace them with fonts and images to make them look as close as possible to the TeX/LaTeX output.  You can find extensive information about jsMath on [http://www.math.union.edu/~dpvc/jsMath/ jsMath homepage].  Here we will just suggest a few quick ways to use it, for more extensive information check jsMath website.&lt;br /&gt;
&lt;br /&gt;
== Quick Start ==&lt;br /&gt;
To get started insert the following html code somewhere in the &amp;lt;head&amp;gt; section of your webpage:&lt;br /&gt;
                &amp;lt;STYLE&amp;gt; #jsMath_Warning {display: none} &amp;lt;/STYLE&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
                jsMath = {&lt;br /&gt;
                        Autoload: {&lt;br /&gt;
                                loadFonts: [&amp;quot;msam10&amp;quot;,&amp;quot;msbm10&amp;quot;],&lt;br /&gt;
                                findTeXstrings: 0,      // 1 to look for any tex-delimited math&lt;br /&gt;
                                findLaTeXstrings: 1     // 1 to look for \(...\) and \[...\] only&lt;br /&gt;
                        }&lt;br /&gt;
                }&lt;br /&gt;
                &amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT SRC=&amp;quot;/jsMath/plugins/autoload.js&amp;quot;&amp;gt;&amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
                window.onload = function () {&lt;br /&gt;
                        jsMath.Autoload.Check();&lt;br /&gt;
                        jsMath.Process(document);&lt;br /&gt;
                }&lt;br /&gt;
                &amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
This code will make sure jsMath is loaded if and only if you use LaTeX style formulas somewhere in the body of your document. That means that the following text:&lt;br /&gt;
                \( f(\alpha) = x+\beta \)&lt;br /&gt;
will get translated into an inline formula, as in: \( f(\alpha)=x+\beta \) - note that there is a small delay before the text is converted into formulas.  For displayed equations you can do:&lt;br /&gt;
                \[ \int_\alpha^\beta x = \mathbb{A} \]&lt;br /&gt;
which gets translated like this: \[ \int_\alpha^\beta x = \mathbb{A} \]&lt;br /&gt;
&lt;br /&gt;
By changing findTeXstrings: 0 to findTeXstrings: 1 above you could also make jsMath correctly interpret formulas delimited with $ ... $ and $$ ... $$, but you should then be careful how you use the $ symbol in your webpages (just like you have to be careful with it in a real TeX/LaTeX document).&lt;br /&gt;
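For example, with findTeXstrings set to 1, a line such as (this snippet is just an illustration):&lt;br /&gt;
 The area is $\pi r^2$ and $$e^{i\pi}+1=0$$&lt;br /&gt;
would also be rendered by jsMath, with the $$ ... $$ part displayed as a standalone equation.&lt;br /&gt;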
== Control Panel ==&lt;br /&gt;
If you look carefully at the bottom right corner of this webpage you will notice a tiny icon saying jsMath.  That's the jsMath control panel and by clicking on it you can tweak a few options.&lt;br /&gt;
== Printing jsMath webpages ==&lt;br /&gt;
If you print these webpages you may notice that symbols look jagged and low resolution (plus you might see a big warning at the top of the page instructing you to use HiRes fonts).  To produce a better printout, first click on the &amp;quot;HiRes Fonts for printing&amp;quot; button in the jsMath control panel and then print your webpage.&lt;br /&gt;
== Advanced Use ==&lt;br /&gt;
For more advanced use please check the [http://www.math.union.edu/~dpvc/jsMath/ jsMath homepage].  All of the plugins and fonts are installed on the math webservers so they can be used immediately. jsMath is installed under /jsMath (or /jsmath), as shown in the examples above.&lt;br /&gt;
&lt;br /&gt;
== Advanced Examples ==&lt;br /&gt;
This is taken from one of jsMath examples:&lt;br /&gt;
\[&lt;br /&gt;
\det\left|\,\matrix{&lt;br /&gt;
c_0 &amp;amp; c_1 &amp;amp; c_2 &amp;amp; \ldots &amp;amp; c_{n\phantom{+1}}\cr&lt;br /&gt;
c_1 &amp;amp; c_2 &amp;amp; c_3 &amp;amp; \ldots &amp;amp; c_{n+1}\cr&lt;br /&gt;
c_2 &amp;amp; c_3 &amp;amp; c_4 &amp;amp; \ldots &amp;amp; c_{n+2}\cr&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots \cr&lt;br /&gt;
c_n &amp;amp; c_{n+1} &amp;amp;  c_{n+2} &amp;amp; \ldots &amp;amp; c_{2n}} \right| &amp;gt; 0&lt;br /&gt;
\]&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Add_TeX_to_your_webpage&amp;diff=1753</id>
		<title>HowTos:Add TeX to your webpage</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Add_TeX_to_your_webpage&amp;diff=1753"/>
		<updated>2007-03-08T15:15:46Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This HowTo provides basic instructions on how to add TeX based formulas to your webpages located on the math webserver by using jsMath.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
The main math webserver as well as the cgi webserver have the jsMath package installed on them and you can very easily use it to add TeX based formulas/text to your webpages hosted on math webservers.  &lt;br /&gt;
&lt;br /&gt;
jsMath is a javascript based software that can interpret TeX/LaTeX formulas embedded in your webpage and replace them with fonts and images to make them look as close as possible to the TeX/LaTeX output.  You can find extensive information about jsMath on [http://www.math.union.edu/~dpvc/jsMath/ jsMath homepage].  Here we will just suggest a few quick ways to use it, for more extensive information check jsMath website.&lt;br /&gt;
&lt;br /&gt;
== Quick Start ==&lt;br /&gt;
To get started insert the following html code somewhere in the &amp;lt;head&amp;gt; section of your webpage:&lt;br /&gt;
                &amp;lt;STYLE&amp;gt; #jsMath_Warning {display: none} &amp;lt;/STYLE&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
                jsMath = {&lt;br /&gt;
                        Autoload: {&lt;br /&gt;
                                loadFonts: [&amp;quot;msam10&amp;quot;,&amp;quot;msbm10&amp;quot;],&lt;br /&gt;
                                findTeXstrings: 0,      // 1 to look for any tex-delimited math&lt;br /&gt;
                                findLaTeXstrings: 1     // 1 to look for \(...\) and \[...\] only&lt;br /&gt;
                        }&lt;br /&gt;
                }&lt;br /&gt;
                &amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT SRC=&amp;quot;/jsMath/plugins/autoload.js&amp;quot;&amp;gt;&amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
                window.onload = function () {&lt;br /&gt;
                        jsMath.Autoload.Check();&lt;br /&gt;
                        jsMath.Process(document);&lt;br /&gt;
                }&lt;br /&gt;
                &amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
This code will make sure jsMath is loaded if and only if you use LaTeX style formulas somewhere in the body of your document. That means that the following text:&lt;br /&gt;
                \( f(\alpha) = x+\beta \)&lt;br /&gt;
will get translated into inline formula as in: \( f(\alpha)=x+\beta \) - note how there is a small delay before the text gets converted into formulas.  For displayed equations you can do:&lt;br /&gt;
                \[ \int_\alpha^\beta x = \mathbb{A} \]&lt;br /&gt;
which gets translated like this: \[ \int_\alpha^\beta x = \mathbb{A} \]&lt;br /&gt;
&lt;br /&gt;
By changing findTeXstrings: 0 to findTeXstrings: 1 above you could also make jsMath correctly interpret formulas delimited with $ ... $ and $$ ... $$, but you should then be careful how you use the $ symbol in your webpages (just like you have to be careful with it in a real TeX/LaTeX document).&lt;br /&gt;
== Control Panel ==&lt;br /&gt;
If you look carefully at the bottom right corner of this webpage you will notice a tiny icon saying jsMath.  That's the jsMath control panel and by clicking on it you can tweak a few options.&lt;br /&gt;
== Advanced Use ==&lt;br /&gt;
For more advanced use please check the [http://www.math.union.edu/~dpvc/jsMath/ jsMath homepage].  All of the plugins and fonts are installed on math webservers so they can be used immediately. jsMath is installed under /jsMath (or /jsmath) as shown in above examples.&lt;br /&gt;
&lt;br /&gt;
== Advanced Examples ==&lt;br /&gt;
This is taken from one of jsMath examples:&lt;br /&gt;
\[&lt;br /&gt;
\det\left|\,\matrix{&lt;br /&gt;
c_0 &amp;amp; c_1 &amp;amp; c_2 &amp;amp; \ldots &amp;amp; c_{n\phantom{+1}}\cr&lt;br /&gt;
c_1 &amp;amp; c_2 &amp;amp; c_3 &amp;amp; \ldots &amp;amp; c_{n+1}\cr&lt;br /&gt;
c_2 &amp;amp; c_3 &amp;amp; c_4 &amp;amp; \ldots &amp;amp; c_{n+2}\cr&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots \cr&lt;br /&gt;
c_n &amp;amp; c_{n+1} &amp;amp;  c_{n+2} &amp;amp; \ldots &amp;amp; c_{2n}} \right| &amp;gt; 0&lt;br /&gt;
\]&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Add_TeX_to_your_webpage&amp;diff=1752</id>
		<title>HowTos:Add TeX to your webpage</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Add_TeX_to_your_webpage&amp;diff=1752"/>
		<updated>2007-03-08T15:13:53Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: url fix&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This HowTo provides basic instructions on how to add TeX based formulas to your webpages located on the math webserver by using jsMath.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
The main math webserver as well as the cgi webserver have the jsMath package installed on them and you can very easily use it to add TeX based formulas/text to your webpages hosted on math webservers.  &lt;br /&gt;
&lt;br /&gt;
jsMath is a javascript based software that can interpret TeX/LaTeX formulas embedded in your webpage and replace them with fonts and images to make them look as close as possible to the TeX/LaTeX output.  You can find extensive information about jsMath on [http://www.math.union.edu/~dpvc/jsMath/ jsMath homepage].  Here we will just suggest a few quick ways to use it, for more extensive information check jsMath website.&lt;br /&gt;
&lt;br /&gt;
== Quick Start ==&lt;br /&gt;
To get started insert the following html code somewhere in the &amp;lt;head&amp;gt; section of your webpage:&lt;br /&gt;
                &amp;lt;STYLE&amp;gt; #jsMath_Warning {display: none} &amp;lt;/STYLE&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
                jsMath = {&lt;br /&gt;
                        Autoload: {&lt;br /&gt;
                                loadFonts: [&amp;quot;msam10&amp;quot;,&amp;quot;msbm10&amp;quot;],&lt;br /&gt;
                                findTeXstrings: 0,      // 1 to look for any tex-delimited math&lt;br /&gt;
                                findLaTeXstrings: 1     // 1 to look for \(...\) and \[...\] only&lt;br /&gt;
                        }&lt;br /&gt;
                }&lt;br /&gt;
                &amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT SRC=&amp;quot;/jsMath/plugins/autoload.js&amp;quot;&amp;gt;&amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
                window.onload = function () {&lt;br /&gt;
                        jsMath.Autoload.Check();&lt;br /&gt;
                        jsMath.Process(document);&lt;br /&gt;
                }&lt;br /&gt;
                &amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
This code will make sure jsMath is loaded if and only if you use LaTeX style formulas somewhere in the body of your document. That means that the following text:&lt;br /&gt;
                \( f(\alpha) = x+\beta \)&lt;br /&gt;
will get translated into inline formula as in: \( f(\alpha)=x+\beta \) - note how there is a small delay before the text gets converted into formulas.  For displayed equations you can do:&lt;br /&gt;
                \[ \int_\alpha^\beta x = \mathbb{A} \]&lt;br /&gt;
which gets translated like this: \[ \int_\alpha^\beta x = \mathbb{A} \]&lt;br /&gt;
&lt;br /&gt;
By changing findTeXstrings: 0 to findTeXstrings: 1 above you could also make jsMath correctly interpret formulas delimited with $ ... $ and $$ ... $$, but you should then be careful how you use the $ symbol in your webpages (just like you have to be careful with it in a real TeX/LaTeX document).&lt;br /&gt;
&lt;br /&gt;
== Advanced Use ==&lt;br /&gt;
For more advanced use please check the [http://www.math.union.edu/~dpvc/jsMath/ jsMath homepage].  All of the plugins and fonts are installed on math webservers so they can be used immediately. jsMath is installed under /jsMath (or /jsmath) as shown in above examples.&lt;br /&gt;
&lt;br /&gt;
== Advanced Examples ==&lt;br /&gt;
This is taken from one of jsMath examples:&lt;br /&gt;
\[&lt;br /&gt;
\det\left|\,\matrix{&lt;br /&gt;
c_0 &amp;amp; c_1 &amp;amp; c_2 &amp;amp; \ldots &amp;amp; c_{n\phantom{+1}}\cr&lt;br /&gt;
c_1 &amp;amp; c_2 &amp;amp; c_3 &amp;amp; \ldots &amp;amp; c_{n+1}\cr&lt;br /&gt;
c_2 &amp;amp; c_3 &amp;amp; c_4 &amp;amp; \ldots &amp;amp; c_{n+2}\cr&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots \cr&lt;br /&gt;
c_n &amp;amp; c_{n+1} &amp;amp;  c_{n+2} &amp;amp; \ldots &amp;amp; c_{2n}} \right| &amp;gt; 0&lt;br /&gt;
\]&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Add_TeX_to_your_webpage&amp;diff=1751</id>
		<title>HowTos:Add TeX to your webpage</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Add_TeX_to_your_webpage&amp;diff=1751"/>
		<updated>2007-03-08T15:13:37Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: hyperlink fix&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This HowTo provides basic instructions on how to add TeX based formulas to your webpages located on the math webserver by using jsMath.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Both the main math webserver and the cgi webserver have the jsMath package installed, so you can easily use it to add TeX-based formulas and text to your webpages hosted on the math webservers.&lt;br /&gt;
&lt;br /&gt;
jsMath is JavaScript-based software that interprets TeX/LaTeX formulas embedded in your webpage and replaces them with fonts and images that look as close as possible to real TeX/LaTeX output.  You can find extensive information on the [http://www.math.union.edu/~dpvc/jsMath/ jsMath homepage].  Here we will just suggest a few quick ways to use it; for more extensive information, check the jsMath website.&lt;br /&gt;
&lt;br /&gt;
== Quick Start ==&lt;br /&gt;
To get started, insert the following HTML code somewhere in the &amp;lt;head&amp;gt; section of your webpage:&lt;br /&gt;
                &amp;lt;STYLE&amp;gt; #jsMath_Warning {display: none} &amp;lt;/STYLE&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
                jsMath = {&lt;br /&gt;
                        Autoload: {&lt;br /&gt;
                                loadFonts: [&amp;quot;msam10&amp;quot;,&amp;quot;msbm10&amp;quot;],&lt;br /&gt;
                                findTeXstrings: 0,      // 1 to look for any tex-delimited math&lt;br /&gt;
                                findLaTeXstrings: 1     // 1 to look for \(...\) and \[...\] only&lt;br /&gt;
                        }&lt;br /&gt;
                }&lt;br /&gt;
                &amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT SRC=&amp;quot;/jsMath/plugins/autoload.js&amp;quot;&amp;gt;&amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
                window.onload = function () {&lt;br /&gt;
                        jsMath.Autoload.Check();&lt;br /&gt;
                        jsMath.Process(document);&lt;br /&gt;
                }&lt;br /&gt;
                &amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
This code makes sure jsMath is loaded if and only if you use LaTeX-style formulas somewhere in the body of your document. That means that the following text:&lt;br /&gt;
                \( f(\alpha) = x+\beta \)&lt;br /&gt;
will get translated into an inline formula as in: \( f(\alpha)=x+\beta \) - note that there is a small delay before the text is converted into formulas.  For displayed equations you can do:&lt;br /&gt;
                \[ \int_\alpha^\beta x = \mathbb{A} \]&lt;br /&gt;
which gets translated like this: \[ \int_\alpha^\beta x = \mathbb{A} \]&lt;br /&gt;
&lt;br /&gt;
By modifying findTeXstrings:0 to findTeXstrings:1 you can also make jsMath correctly interpret formulas delimited with $ ... $ and $$ ... $$, but you should then be careful how you use the $ symbol in your webpages (just as you have to be careful with it in a real TeX/LaTeX document).&lt;br /&gt;
&lt;br /&gt;
== Advanced Use ==&lt;br /&gt;
For more advanced use please check the [http://www.math.union.edu/~dpvc/jsMath/ jsMath homepage].  All of the plugins and fonts are installed on the math webservers so they can be used immediately. jsMath is installed under /jsMath (or /jsmath) as shown in the above examples.&lt;br /&gt;
&lt;br /&gt;
== Advanced Examples ==&lt;br /&gt;
This is taken from one of the jsMath examples:&lt;br /&gt;
\[&lt;br /&gt;
\det\left|\,\matrix{&lt;br /&gt;
c_0 &amp;amp; c_1 &amp;amp; c_2 &amp;amp; \ldots &amp;amp; c_{n\phantom{+1}}\cr&lt;br /&gt;
c_1 &amp;amp; c_2 &amp;amp; c_3 &amp;amp; \ldots &amp;amp; c_{n+1}\cr&lt;br /&gt;
c_2 &amp;amp; c_3 &amp;amp; c_4 &amp;amp; \ldots &amp;amp; c_{n+2}\cr&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots \cr&lt;br /&gt;
c_n &amp;amp; c_{n+1} &amp;amp;  c_{n+2} &amp;amp; \ldots &amp;amp; c_{2n}} \right| &amp;gt; 0&lt;br /&gt;
\]&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Add_TeX_to_your_webpage&amp;diff=1750</id>
		<title>HowTos:Add TeX to your webpage</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Add_TeX_to_your_webpage&amp;diff=1750"/>
		<updated>2007-03-08T15:11:20Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: /* Advanced Examples */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This HowTo provides basic instructions on adding TeX-based formulas to your webpages on the math webserver using jsMath.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Both the main math webserver and the cgi webserver have the jsMath package installed, so you can easily use it to add TeX-based formulas and text to your webpages hosted on the math webservers.&lt;br /&gt;
&lt;br /&gt;
jsMath is JavaScript-based software that interprets TeX/LaTeX formulas embedded in your webpage and replaces them with fonts and images that look as close as possible to real TeX/LaTeX output.  You can find extensive information on the [http://www.math.union.edu/~dpvc/jsMath/ jsMath homepage].  Here we will just suggest a few quick ways to use it; for more extensive information, check the jsMath website.&lt;br /&gt;
&lt;br /&gt;
== Quick Start ==&lt;br /&gt;
To get started, insert the following HTML code somewhere in the &amp;lt;head&amp;gt; section of your webpage:&lt;br /&gt;
                &amp;lt;STYLE&amp;gt; #jsMath_Warning {display: none} &amp;lt;/STYLE&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
                jsMath = {&lt;br /&gt;
                        Autoload: {&lt;br /&gt;
                                loadFonts: [&amp;quot;msam10&amp;quot;,&amp;quot;msbm10&amp;quot;],&lt;br /&gt;
                                findTeXstrings: 0,      // 1 to look for any tex-delimited math&lt;br /&gt;
                                findLaTeXstrings: 1     // 1 to look for \(...\) and \[...\] only&lt;br /&gt;
                        }&lt;br /&gt;
                }&lt;br /&gt;
                &amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT SRC=&amp;quot;/jsMath/plugins/autoload.js&amp;quot;&amp;gt;&amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
                window.onload = function () {&lt;br /&gt;
                        jsMath.Autoload.Check();&lt;br /&gt;
                        jsMath.Process(document);&lt;br /&gt;
                }&lt;br /&gt;
                &amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
This code makes sure jsMath is loaded if and only if you use LaTeX-style formulas somewhere in the body of your document. That means that the following text:&lt;br /&gt;
                \( f(\alpha) = x+\beta \)&lt;br /&gt;
will get translated into an inline formula as in: \( f(\alpha)=x+\beta \) - note that there is a small delay before the text is converted into formulas.  For displayed equations you can do:&lt;br /&gt;
                \[ \int_\alpha^\beta x = \mathbb{A} \]&lt;br /&gt;
which gets translated like this: \[ \int_\alpha^\beta x = \mathbb{A} \]&lt;br /&gt;
&lt;br /&gt;
By modifying findTeXstrings:0 to findTeXstrings:1 you can also make jsMath correctly interpret formulas delimited with $ ... $ and $$ ... $$, but you should then be careful how you use the $ symbol in your webpages (just as you have to be careful with it in a real TeX/LaTeX document).&lt;br /&gt;
&lt;br /&gt;
== Advanced Use ==&lt;br /&gt;
For more advanced use please check the [http://www.math.union.edu/~dpvc/jsMath/ jsMath homepage].  All of the plugins and fonts are installed on the math webservers so they can be used immediately. jsMath is installed under /jsMath (or /jsmath) as shown in the above examples.&lt;br /&gt;
== Advanced Examples ==&lt;br /&gt;
This is taken from one of jsMath examples:&lt;br /&gt;
\[&lt;br /&gt;
\det\left|\,\matrix{&lt;br /&gt;
c_0 &amp;amp; c_1 &amp;amp; c_2 &amp;amp; \ldots &amp;amp; c_{n\phantom{+1}}\cr&lt;br /&gt;
c_1 &amp;amp; c_2 &amp;amp; c_3 &amp;amp; \ldots &amp;amp; c_{n+1}\cr&lt;br /&gt;
c_2 &amp;amp; c_3 &amp;amp; c_4 &amp;amp; \ldots &amp;amp; c_{n+2}\cr&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots \cr&lt;br /&gt;
c_n &amp;amp; c_{n+1} &amp;amp;  c_{n+2} &amp;amp; \ldots &amp;amp; c_{2n}} \right| &amp;gt; 0&lt;br /&gt;
\]&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Add_TeX_to_your_webpage&amp;diff=1749</id>
		<title>HowTos:Add TeX to your webpage</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Add_TeX_to_your_webpage&amp;diff=1749"/>
		<updated>2007-03-08T15:10:35Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: more examples&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This HowTo provides basic instructions on adding TeX-based formulas to your webpages on the math webserver using jsMath.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Both the main math webserver and the cgi webserver have the jsMath package installed, so you can easily use it to add TeX-based formulas and text to your webpages hosted on the math webservers.&lt;br /&gt;
&lt;br /&gt;
jsMath is JavaScript-based software that interprets TeX/LaTeX formulas embedded in your webpage and replaces them with fonts and images that look as close as possible to real TeX/LaTeX output.  You can find extensive information on the [http://www.math.union.edu/~dpvc/jsMath/ jsMath homepage].  Here we will just suggest a few quick ways to use it; for more extensive information, check the jsMath website.&lt;br /&gt;
&lt;br /&gt;
== Quick Start ==&lt;br /&gt;
To get started, insert the following HTML code somewhere in the &amp;lt;head&amp;gt; section of your webpage:&lt;br /&gt;
                &amp;lt;STYLE&amp;gt; #jsMath_Warning {display: none} &amp;lt;/STYLE&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
                jsMath = {&lt;br /&gt;
                        Autoload: {&lt;br /&gt;
                                loadFonts: [&amp;quot;msam10&amp;quot;,&amp;quot;msbm10&amp;quot;],&lt;br /&gt;
                                findTeXstrings: 0,      // 1 to look for any tex-delimited math&lt;br /&gt;
                                findLaTeXstrings: 1     // 1 to look for \(...\) and \[...\] only&lt;br /&gt;
                        }&lt;br /&gt;
                }&lt;br /&gt;
                &amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT SRC=&amp;quot;/jsMath/plugins/autoload.js&amp;quot;&amp;gt;&amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
                window.onload = function () {&lt;br /&gt;
                        jsMath.Autoload.Check();&lt;br /&gt;
                        jsMath.Process(document);&lt;br /&gt;
                }&lt;br /&gt;
                &amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
This code makes sure jsMath is loaded if and only if you use LaTeX-style formulas somewhere in the body of your document. That means that the following text:&lt;br /&gt;
                \( f(\alpha) = x+\beta \)&lt;br /&gt;
will get translated into an inline formula as in: \( f(\alpha)=x+\beta \) - note that there is a small delay before the text is converted into formulas.  For displayed equations you can do:&lt;br /&gt;
                \[ \int_\alpha^\beta x = \mathbb{A} \]&lt;br /&gt;
which gets translated like this: \[ \int_\alpha^\beta x = \mathbb{A} \]&lt;br /&gt;
&lt;br /&gt;
By modifying findTeXstrings:0 to findTeXstrings:1 you can also make jsMath correctly interpret formulas delimited with $ ... $ and $$ ... $$, but you should then be careful how you use the $ symbol in your webpages (just as you have to be careful with it in a real TeX/LaTeX document).&lt;br /&gt;
&lt;br /&gt;
== Advanced Use ==&lt;br /&gt;
For more advanced use please check the [http://www.math.union.edu/~dpvc/jsMath/ jsMath homepage].  All of the plugins and fonts are installed on the math webservers so they can be used immediately. jsMath is installed under /jsMath (or /jsmath) as shown in the above examples.&lt;br /&gt;
== Advanced Examples ==&lt;br /&gt;
This is taken from one of the jsMath examples:&lt;br /&gt;
\[&lt;br /&gt;
\det\left|\,\matrix{&lt;br /&gt;
c_0 &amp;amp; c_1 &amp;amp; c_2 &amp;amp; \ldots &amp;amp; c_{n\phantom{+1}}\cr&lt;br /&gt;
c_1 &amp;amp; c_2 &amp;amp; c_3 &amp;amp; \ldots &amp;amp; c_{n+1}\cr&lt;br /&gt;
c_2 &amp;amp; c_3 &amp;amp; c_4 &amp;amp; \ldots &amp;amp; c_{n+2}\cr&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; \vdots &amp;amp; \ddots &amp;amp; \vdots \cr&lt;br /&gt;
c_n &amp;amp; c_{n+1} &amp;amp;  c_{n+2} &amp;amp; \ldots &amp;amp; c_{2n}} \right| &amp;gt; 0&lt;br /&gt;
\]&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Add_TeX_to_your_webpage&amp;diff=1748</id>
		<title>HowTos:Add TeX to your webpage</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Add_TeX_to_your_webpage&amp;diff=1748"/>
		<updated>2007-03-08T15:06:40Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: initial webpage&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This HowTo provides basic instructions on adding TeX-based formulas to your webpages on the math webserver using jsMath.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Both the main math webserver and the cgi webserver have the jsMath package installed, so you can easily use it to add TeX-based formulas and text to your webpages hosted on the math webservers.&lt;br /&gt;
&lt;br /&gt;
jsMath is JavaScript-based software that interprets TeX/LaTeX formulas embedded in your webpage and replaces them with fonts and images that look as close as possible to real TeX/LaTeX output.  You can find extensive information on the [http://www.math.union.edu/~dpvc/jsMath/ jsMath homepage].  Here we will just suggest a few quick ways to use it; for more extensive information, check the jsMath website.&lt;br /&gt;
&lt;br /&gt;
== Quick Start ==&lt;br /&gt;
To get started, insert the following HTML code somewhere in the &amp;lt;head&amp;gt; section of your webpage:&lt;br /&gt;
                &amp;lt;STYLE&amp;gt; #jsMath_Warning {display: none} &amp;lt;/STYLE&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
                jsMath = {&lt;br /&gt;
                        Autoload: {&lt;br /&gt;
                                loadFonts: [&amp;quot;msam10&amp;quot;,&amp;quot;msbm10&amp;quot;],&lt;br /&gt;
                                findTeXstrings: 0,      // 1 to look for any tex-delimited math&lt;br /&gt;
                                findLaTeXstrings: 1     // 1 to look for \(...\) and \[...\] only&lt;br /&gt;
                        }&lt;br /&gt;
                }&lt;br /&gt;
                &amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT SRC=&amp;quot;/jsMath/plugins/autoload.js&amp;quot;&amp;gt;&amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
                &amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
                window.onload = function () {&lt;br /&gt;
                        jsMath.Autoload.Check();&lt;br /&gt;
                        jsMath.Process(document);&lt;br /&gt;
                }&lt;br /&gt;
                &amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
This code makes sure jsMath is loaded if and only if you use LaTeX-style formulas somewhere in the body of your document. That means that the following text:&lt;br /&gt;
                \( f(\alpha) = x+\beta \)&lt;br /&gt;
will get translated into an inline formula as in: \( f(\alpha)=x+\beta \) - note that there is a small delay before the text is converted into formulas.  For displayed equations you can do:&lt;br /&gt;
                \[ \int_\alpha^\beta x = \mathbb{A} \]&lt;br /&gt;
which gets translated like this: \[ \int_\alpha^\beta x = \mathbb{A} \]&lt;br /&gt;
&lt;br /&gt;
By modifying findTeXstrings:0 to findTeXstrings:1 you can also make jsMath correctly interpret formulas delimited with $ ... $ and $$ ... $$, but you should then be careful how you use the $ symbol in your webpages (just as you have to be careful with it in a real TeX/LaTeX document).&lt;br /&gt;
&lt;br /&gt;
== Advanced Use ==&lt;br /&gt;
For more advanced use please check the [http://www.math.union.edu/~dpvc/jsMath/ jsMath homepage].  All of the plugins and fonts are installed on the math webservers so they can be used immediately. jsMath is installed under /jsMath (or /jsmath) as shown in the above examples.&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos&amp;diff=1747</id>
		<title>HowTos</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos&amp;diff=1747"/>
		<updated>2007-03-08T14:48:51Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: link to HowTo for tex on webpages&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you will find instructions on how to do some of the more common computing tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Certificates ==&lt;br /&gt;
If you are presented with warnings about unsigned certificate when visiting Math webmail or e-mail you will have to import our security certificate.  Please follow instructions for your operating system and/or program:&lt;br /&gt;
* [[HowTos:Certificate importing for Mozilla applications|Certificate import for Mozilla, Thunderbird of Firefox]] (on any operating system)&lt;br /&gt;
* [[HowTos:Certificate importing for Windows|Certificate import for Windows]] (used by Internet Explorer, Outlook or Outlook Express)&lt;br /&gt;
* [[HowTos:Certificate importing for MacOSX|Certificate import for Mac OS X]] (used by Safari and Mac Mail program)&lt;br /&gt;
&lt;br /&gt;
== Connect to Math/PACM systems remotely ==&lt;br /&gt;
There are a number of different ways to access Math/PACM systems and services - login servers, computational machines, E-mail, files on file server and others.  Here are some of these ways:&lt;br /&gt;
* [[HowTos:Access your files on Math/PACM file server via cifs/samba|Access your files on Math/PACM file server via cifs/samba on Windows, Mac OS X or Linux]] - directly access your files on the file server, on campus or after connecting via VPN&lt;br /&gt;
* [[HowTos:Connect to login servers via ssh|Connect to login servers via ssh from Windows, Mac OS X or Linux]] (also copy files back and forth by using ssh/scp)&lt;br /&gt;
* [[HowTos:Remote Linux Desktop access|Remote Linux Desktop access]]&lt;br /&gt;
For E-mail reading/access only please read below.&lt;br /&gt;
&lt;br /&gt;
== E-mail access and configuration ==&lt;br /&gt;
* [[HowTos:E-mail configuration for Thunderbird on Math Linux machines|Configure Thunderbird on Math Linux workstations]]&lt;br /&gt;
* [[HowTos:E-mail configuration for Thunderbird|Configure Thunderbird or Mozilla in general]]&lt;br /&gt;
* [[HowTos:Read E-mail with webmail|Read your e-mail in your web browser by using Horde/IMP webmail]]&lt;br /&gt;
&lt;br /&gt;
== File restore/undelete/backup/snapshots ==&lt;br /&gt;
* [[HowTos:Restore files from snapshots on Linux from home directory on Math file server|How to restore deleted files or previous versions on Linux from home directory on Math/PACM file server]] (for files deleted or changed within last 4 days)&lt;br /&gt;
* [[HowTos:Restore files from snapshots on Windows from home directory on Math file server|How to restore deleted files or previous versions on Windows from home directory on Math/PACM file server]] (for files deleted or changed within last 4 days)&lt;br /&gt;
* [[HowTos:Restore files from snapshots on Mac OS X from home directory on Math file server|How to restore deleted files or previous versions on Mac OS X from home directory on Math/PACM file server]] (for files deleted or changed within last 4 days)&lt;br /&gt;
* [[HowTos:Restore files from backups|How to obtain files from backups]] (for files deleted or changed more than 4 days ago and usually not more than 3-4 months ago)&lt;br /&gt;
&lt;br /&gt;
== Printing ==&lt;br /&gt;
* [[HowTos:Configure MacOSX for Dell W5300n|How to configure your Macintosh for printing with the Dell printers on 11th and 5th floor (W5300n)]]&lt;br /&gt;
* [[HowTos:Configure Windows Printing for Fine Hall|How to configure your Microsoft Windows computer for printing to public printers in Fine Hall]]&lt;br /&gt;
&lt;br /&gt;
== TeX ==&lt;br /&gt;
* [[HowTos:Install TeX on a Microsoft Windows computer|A quick HowTo about installing TeX on a Microsoft Windows computer]]&lt;br /&gt;
* [[HowTos:Add TeX to your webpage|How to add good looking TeX code to your webpages on Math webserver]]&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=MediaWiki:Sidebar&amp;diff=1746</id>
		<title>MediaWiki:Sidebar</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=MediaWiki:Sidebar&amp;diff=1746"/>
		<updated>2007-03-07T21:34:43Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* navigation&lt;br /&gt;
** mainpage|mainpage&lt;br /&gt;
** howtos-url|howtos&lt;br /&gt;
** faq-url|faq&lt;br /&gt;
** docs-url|docs&lt;br /&gt;
** news-url|news&lt;br /&gt;
** recentchanges-url|recentchanges&lt;br /&gt;
** randompage-url|randompage&lt;br /&gt;
** helppage|help&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Configure_Windows_Printing_for_Fine_Hall&amp;diff=1745</id>
		<title>HowTos:Configure Windows Printing for Fine Hall</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Configure_Windows_Printing_for_Fine_Hall&amp;diff=1745"/>
		<updated>2006-10-16T16:07:34Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In order to print to one of the public printers in Fine Hall from a Windows computer (for example your laptop), you can use the Windows print server '''&amp;lt;tt&amp;gt;finehallprint&amp;lt;/tt&amp;gt;'''. Note that Fine Hall printers can be accessed and used only from within Fine Hall. &lt;br /&gt;
&lt;br /&gt;
These are publicly available printers:&lt;br /&gt;
{|&lt;br /&gt;
| '''Windows printer name'''&lt;br /&gt;
| '''Printer location'''&lt;br /&gt;
| '''Printer type'''&lt;br /&gt;
|-&lt;br /&gt;
| ''&amp;lt;tt&amp;gt;\\finehallprint\fine205&amp;lt;/tt&amp;gt;''&lt;br /&gt;
| 205 Fine Hall&lt;br /&gt;
| HP LaserJet 4250 Duplex&lt;br /&gt;
|-&lt;br /&gt;
| ''&amp;lt;tt&amp;gt;\\finehallprint\fine219&amp;lt;/tt&amp;gt;''&lt;br /&gt;
| 219 Fine Hall cluster&lt;br /&gt;
| HP LaserJet 4300 Duplex&lt;br /&gt;
|-&lt;br /&gt;
| ''&amp;lt;tt&amp;gt;\\finehallprint\fine305&amp;lt;/tt&amp;gt;''&lt;br /&gt;
| 305 Fine Hall (restricted access outside business hours)&lt;br /&gt;
| HP LaserJet 4350 Duplex&lt;br /&gt;
|-&lt;br /&gt;
| ''&amp;lt;tt&amp;gt;\\finehallprint\fine305pcl&amp;lt;/tt&amp;gt;''&lt;br /&gt;
| 305 Fine Hall (restricted access outside business hours)&lt;br /&gt;
| HP LaserJet 4350 Duplex PCL driver&lt;br /&gt;
|-&lt;br /&gt;
| ''&amp;lt;tt&amp;gt;\\finehallprint\fine511&amp;lt;/tt&amp;gt;''&lt;br /&gt;
| 5th floor Fine Hall, outside offices 504 and 505&lt;br /&gt;
| Dell W5300 Duplex printer&lt;br /&gt;
|-&lt;br /&gt;
| ''&amp;lt;tt&amp;gt;\\finehallprint\fine811&amp;lt;/tt&amp;gt;''&lt;br /&gt;
| 8th floor Fine Hall, outside offices 804 and 805&lt;br /&gt;
| HP LaserJet 4300 Duplex&lt;br /&gt;
|-&lt;br /&gt;
| ''&amp;lt;tt&amp;gt;\\finehallprint\fine1111&amp;lt;/tt&amp;gt;''&lt;br /&gt;
| 11th floor Fine Hall, outside offices 1104 and 1105&lt;br /&gt;
| Dell W5300 Duplex printer&lt;br /&gt;
|}&lt;br /&gt;
While you may find other printers on &amp;lt;tt&amp;gt;finehallprint&amp;lt;/tt&amp;gt;, they are private printers reserved for their owners, so please do not try to use them.&lt;br /&gt;
&lt;br /&gt;
Note also that all Fine Hall printers default to duplex (double-sided) printing, so you may have to change your printer settings if you want to print single-sided.&lt;br /&gt;
&lt;br /&gt;
== Detailed instructions ==&lt;br /&gt;
The following example shows how to set up one of these printers, fine305, on your computer.  First click on the &amp;quot;Start&amp;quot; button (1) and then on &amp;quot;Run&amp;quot; (2):&lt;br /&gt;
[[Image:Fs-startrun.jpg|center]]&lt;br /&gt;
In the &amp;quot;Run&amp;quot; window that comes up, type the printer address from the above table into &amp;quot;Open&amp;quot; (1). In our case, for printer 305, it is ''&amp;lt;tt&amp;gt;\\finehallprint\fine305&amp;lt;/tt&amp;gt;''.  Then click on &amp;quot;OK&amp;quot; (2):&lt;br /&gt;
[[Image:Finehallprint-run305.jpg|center]]&lt;br /&gt;
Your computer will then attempt to connect, and it may ask you to confirm the installation with a dialog resembling the following, where you should click on &amp;quot;Yes&amp;quot; (1):&lt;br /&gt;
[[Image:Finehallprint-confirm.jpg|center]]&lt;br /&gt;
That's it - you should now be able to use this printer.&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Public_printers&amp;diff=1744</id>
		<title>Documentation and Information:Public printers</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Documentation_and_Information:Public_printers&amp;diff=1744"/>
		<updated>2006-10-16T15:43:51Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: fixed printers&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the list of publicly accessible printers with their names, their locations and types.&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
| '''Linux printer name''' &lt;br /&gt;
| '''Windows printer name''' &lt;br /&gt;
| '''Location''' &lt;br /&gt;
| '''Type'''&lt;br /&gt;
|-&lt;br /&gt;
| 205 &lt;br /&gt;
| fine205 &lt;br /&gt;
| 205 Fine Hall &lt;br /&gt;
| HP LaserJet 4250 Duplex&lt;br /&gt;
|-&lt;br /&gt;
| 219&lt;br /&gt;
| fine219 &lt;br /&gt;
| 219 Fine Hall &lt;br /&gt;
| HP LaserJet 4300 Duplex&lt;br /&gt;
|-&lt;br /&gt;
| 305&lt;br /&gt;
| fine305 &lt;br /&gt;
| 305 Fine Hall &lt;br /&gt;
| HP LaserJet 4350 Duplex&lt;br /&gt;
|-&lt;br /&gt;
| 511&lt;br /&gt;
| fine511&lt;br /&gt;
| 5th floor Fine Hall, in front of 504 and 505 offices &lt;br /&gt;
| Dell W5300n Duplex printer&lt;br /&gt;
|-&lt;br /&gt;
| 811&lt;br /&gt;
| fine811&lt;br /&gt;
| 8th floor Fine Hall, in front of 804 and 805 offices &lt;br /&gt;
| HP LaserJet 4300 Duplex&lt;br /&gt;
|-&lt;br /&gt;
| 1111&lt;br /&gt;
| fine1111&lt;br /&gt;
| 11th floor Fine Hall, in front of 1104 and 1105 offices &lt;br /&gt;
| Dell W5300n Duplex printer&lt;br /&gt;
|}&lt;br /&gt;
The 205 and 305 printers are publicly accessible only during work hours; at other times, keys to the relevant rooms/areas are required.&lt;br /&gt;
&lt;br /&gt;
== Linux printer names and variants ==&lt;br /&gt;
On Fine Hall Linux workstations and servers printer names are just room numbers, as listed in the first column of the above table.  &lt;br /&gt;
&lt;br /&gt;
By default all printing from Linux is duplex, i.e. the printer will use both sides of the paper.  If you want to print single-sided, append the letter s to the name of your chosen printer.  For example, you can print to 305s or 1111s.&lt;br /&gt;
&lt;br /&gt;
If certain files, especially some Acrobat PDF documents, fail to print (nothing comes out of the printer), you can try printing with the PCL driver.  To do that, add pcl to the name of the printer.  For example, if you are trying to print to 511, use 511pcl instead.&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Frequently_Asked_Questions&amp;diff=1743</id>
		<title>Frequently Asked Questions</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Frequently_Asked_Questions&amp;diff=1743"/>
		<updated>2006-10-04T15:19:39Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: /* Run computations and disconnect */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;On this page you will find answers to some of most frequently asked questions about computing at Fine Hall.&lt;br /&gt;
&lt;br /&gt;
== E-mail ==&lt;br /&gt;
=== Read e-mail ===&lt;br /&gt;
You can read your e-mail in one of the following ways: &lt;br /&gt;
* login via ssh to math.princeton.edu or pacm.princeton.edu and use pine, mutt or another terminal-based e-mail client&lt;br /&gt;
* use [http://www.math.princeton.edu/mail Math Dept. WebMail]&lt;br /&gt;
* configure your e-mail client (like Thunderbird, Mozilla, Outlook and others) to access your e-mail via IMAP by following instructions in [[HowTos#E-mail configuration|HowTos section about E-mail configuration]]&lt;br /&gt;
&lt;br /&gt;
=== Forward e-mail from Princeton to your Math/PACM account ===&lt;br /&gt;
Open the [http://www.princeton.edu/imap OIT Account Management Page] in your browser.  You will be asked for your OIT username and password.  Once logged in, click on the &amp;quot;Set Email Delivery&amp;quot; link on the left.  That will bring up the &amp;quot;Where Is My Mail Going&amp;quot; information - if you haven't changed your e-mail delivery location from the default, it is likely to be &amp;lt;tt&amp;gt;yourusername@mail.Princeton.EDU&amp;lt;/tt&amp;gt;.  Click on the &amp;quot;Change Entry&amp;quot; button (found next to the current delivery e-mail) and, on the next screen, forward your Princeton email by setting your primary mail delivery location to your &amp;lt;tt&amp;gt;yourusername@math.princeton.edu&amp;lt;/tt&amp;gt; e-mail address, then click on &amp;quot;Submit Changes&amp;quot;.&lt;br /&gt;
=== Forward e-mail from your math account ===&lt;br /&gt;
To forward all of your math e-mail to another account, e.g. if you are leaving Princeton, create a file named .forward in your home directory that contains the e-mail address to which your email should be forwarded.  You can specify multiple e-mail addresses, each on its own line.  For example, a .forward file forwarding everything to a single (hypothetical) address would contain just the line:&lt;br /&gt;
 someuser@example.com&lt;br /&gt;
=== Vacation messages ===&lt;br /&gt;
Vacation messages can be set through Math/PACM webmail by going to [https://www.math.princeton.edu/horde3/vacation/ https://www.math.princeton.edu/horde3/vacation/].  You can also find the vacation settings webpage under &amp;quot;My Account&amp;quot; on the left side menu of the [https://www.math.princeton.edu/mail webmail].&lt;br /&gt;
&lt;br /&gt;
On the vacation webpage you can turn the vacation message on and off, specify the subject and content of the vacation replies, and specify how often replies are sent.  Finally, you can even set vacation start and end times.&lt;br /&gt;
&lt;br /&gt;
== Passwords ==&lt;br /&gt;
=== Types of passwords ===&lt;br /&gt;
Your Math/PACM account has two passwords associated with it: the Linux/LDAP password, which is used for everything except accessing the fileserver through Windows file sharing (also called smb, cifs or samba file sharing), and the windows/cifs password.&lt;br /&gt;
=== Password changing ===&lt;br /&gt;
If you need to change your password you should do it through the [https://www.math.princeton.edu/horde3/passwd/ Math/PACM webmail interface].  This way your LDAP password is changed together with your windows/cifs password, which ensures they stay the same.  Once logged in with your current password, you will be prompted for your current and new passwords.&lt;br /&gt;
&lt;br /&gt;
You can also find the password changing page under &amp;quot;My Account&amp;quot; in the left-side menu of the [https://www.math.princeton.edu/mail webmail].&lt;br /&gt;
&lt;br /&gt;
== Running computations ==&lt;br /&gt;
=== Computation guidelines ===&lt;br /&gt;
These are the guidelines for running computations on Math/PACM machines:&lt;br /&gt;
* Unless you are running computations on dedicated machines (like the Comp or Macomp cluster or your own desktop), all jobs should be reniced to 19 (the lowest scheduling priority), e.g.:&lt;br /&gt;
 nice -n 19 mycomputation&lt;br /&gt;
This ensures that interactive users of the machine are not impacted by your calculations.  Your job will still get all the available idle CPU time.&lt;br /&gt;
* Please make sure your computation does not consume too much memory.  This is particularly important if you intend to run your computations on desktops used by others.  Heavy memory use on machines that do not have much to begin with will push the operating system into swapping, which will severely impact both the user of the desktop and your own computation.  Most Fine Hall desktops have only 512MB of RAM, so you should make sure your job doesn't consume more than, say, 100MB or so - the less the better.&lt;br /&gt;
* If your job requires a lot of memory and you do not have access to the macomp cluster, please feel free to run it on the login server - math.princeton.edu - which has a pair of very fast processors and 4GB of memory.  You should still limit your job to no more than 2GB of memory (or 3GB, but only for a short period of time).  Also take into account your per-job memory consumption and the number of jobs you and others are already running on math.princeton.edu.  E.g. running more than one computation that requires 2GB or more will quickly produce an unproductive environment for all users.&lt;br /&gt;
* Computational jobs on math.princeton.edu are automatically reniced, and you should limit yourself to at most 2 computations at any one time.  If your computation is long-running you do not have to renice it yourself, but if you intend to run lots of short ones please do so (as automatic renicing does not kick in immediately).&lt;br /&gt;
=== Run computations and disconnect ===&lt;br /&gt;
If your computation is long-running it is best to start it in such a way that you can log out and the computation will continue.  This also prevents your computations from failing if you lose network connectivity.  To achieve this you should run your computations with the nohup command.  Nohup makes sure the job is disconnected from the terminal - in other words, when you disconnect, your job will not get the &amp;quot;I have logged out, please quit&amp;quot; signal (SIGHUP).  For example you should type something like this:&lt;br /&gt;
 nohup nice -19 mycomputation_program &amp;gt; my_output.txt 2&amp;gt;&amp;amp;1 &amp;amp;&lt;br /&gt;
This will run your job, reniced to 19, and any output (both regular and error) will be placed into the file my_output.txt.  If you want to place the error output into another file you can do:&lt;br /&gt;
 nohup nice -19 mycomputation_program &amp;gt; my_output.txt 2&amp;gt; my_erroroutput.txt &amp;amp;&lt;br /&gt;
If you do not need the output from the command, e.g. because your program dumps its results directly into various files, you can redirect all of its output into /dev/null:&lt;br /&gt;
 nohup nice -19 mycomputation_program &amp;gt; /dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&lt;br /&gt;
&lt;br /&gt;
=== Run disconnected computations for matlab ===&lt;br /&gt;
If you want to run matlab computations you can do something like:&lt;br /&gt;
 nohup nice -19 matlab -nodisplay -nodesktop -nojvm -nosplash &amp;lt; mymatlab_commands.m &amp;gt; my_output.txt 2&amp;gt;&amp;amp;1 &amp;amp;&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Frequently_Asked_Questions&amp;diff=1742</id>
		<title>Frequently Asked Questions</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Frequently_Asked_Questions&amp;diff=1742"/>
		<updated>2006-10-03T13:46:42Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: /* Run computations and disconnect */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;On this page you will find answers to some of the most frequently asked questions about computing at Fine Hall.&lt;br /&gt;
&lt;br /&gt;
== E-mail ==&lt;br /&gt;
=== Read e-mail ===&lt;br /&gt;
You can read your e-mail in one of the following ways: &lt;br /&gt;
* log in via ssh to math.princeton.edu or pacm.princeton.edu and use pine, mutt or other terminal-based e-mail clients&lt;br /&gt;
* use [http://www.math.princeton.edu/mail Math Dept. WebMail]&lt;br /&gt;
* configure your e-mail client (like Thunderbird, Mozilla, Outlook and others) to access your e-mail via IMAP by following instructions in [[HowTos#E-mail configuration|HowTos section about E-mail configuration]]&lt;br /&gt;
&lt;br /&gt;
=== Forward e-mail from Princeton to your Math/PACM account ===&lt;br /&gt;
Open the [http://www.princeton.edu/imap OIT Account Management Page] in your browser.  You will be asked for your OIT username and password.  Once logged in, click on the &amp;quot;Set Email Delivery&amp;quot; link on the left.  That will bring up the &amp;quot;Where Is My Mail Going&amp;quot; information - if you haven't changed your e-mail delivery location from the default, it is likely to be &amp;lt;tt&amp;gt;yourusername@mail.Princeton.EDU&amp;lt;/tt&amp;gt;.  Click on the &amp;quot;Change Entry&amp;quot; button (found next to the current delivery address) and, on the next screen, forward your Princeton e-mail by setting your primary mail delivery location to your &amp;lt;tt&amp;gt;yourusername@math.princeton.edu&amp;lt;/tt&amp;gt; e-mail address, then click on &amp;quot;Submit Changes&amp;quot;.&lt;br /&gt;
=== Forward e-mail from your math account ===&lt;br /&gt;
To forward all of your math e-mail to another account, e.g. if you are leaving Princeton, create a &amp;lt;tt&amp;gt;.forward&amp;lt;/tt&amp;gt; file in your home directory that contains the e-mail address to which your mail should be forwarded.  You can specify multiple e-mail addresses, each on its own line.&lt;br /&gt;
=== Vacation messages ===&lt;br /&gt;
Vacation messages can be set through Math/PACM webmail by going to [https://www.math.princeton.edu/horde3/vacation/ https://www.math.princeton.edu/horde3/vacation/].  You can also find the vacation settings page under &amp;quot;My Account&amp;quot; in the left-side menu of the [https://www.math.princeton.edu/mail webmail].&lt;br /&gt;
&lt;br /&gt;
On the vacation page you can turn the vacation message on and off, specify the subject and content of the automatic replies, and specify how often they are sent.  Finally, you can even set vacation start and end times.&lt;br /&gt;
&lt;br /&gt;
== Passwords ==&lt;br /&gt;
=== Types of passwords ===&lt;br /&gt;
Your Math/PACM account has two passwords associated with it: the Linux/LDAP password, which is used for everything except accessing the fileserver through Windows file sharing (also called SMB, CIFS or Samba file sharing), and the Windows/CIFS password, which is used for that file sharing.&lt;br /&gt;
=== Password changing ===&lt;br /&gt;
If you need to change your password you should do it through the [https://www.math.princeton.edu/horde3/passwd/ Math/PACM webmail interface].  This way your LDAP password and your Windows/CIFS password are changed together, which ensures they stay the same.  Once logged in, you will be prompted for your current password and your new password.&lt;br /&gt;
&lt;br /&gt;
You can also find the password changing page under &amp;quot;My Account&amp;quot; in the left-side menu of the [https://www.math.princeton.edu/mail webmail].&lt;br /&gt;
&lt;br /&gt;
== Running computations ==&lt;br /&gt;
=== Computation guidelines ===&lt;br /&gt;
These are the guidelines for running computations on Math/PACM machines:&lt;br /&gt;
* Unless you are running computations on dedicated machines (like the Comp or Macomp cluster or your own desktop), all jobs should be reniced to 19 (the lowest scheduling priority), e.g.:&lt;br /&gt;
 nice -n 19 mycomputation&lt;br /&gt;
This ensures that interactive users of the machine are not impacted by your calculations.  Your job will still get all the available idle CPU time.&lt;br /&gt;
* Please make sure your computation does not consume too much memory.  This is particularly important if you intend to run your computations on desktops used by others.  Heavy memory use on machines that do not have much to begin with will push the operating system into swapping, which will severely impact both the user of the desktop and your own computation.  Most Fine Hall desktops have only 512MB of RAM, so you should make sure your job doesn't consume more than, say, 100MB or so - the less the better.&lt;br /&gt;
* If your job requires a lot of memory and you do not have access to the macomp cluster, please feel free to run it on the login server - math.princeton.edu - which has a pair of very fast processors and 4GB of memory.  You should still limit your job to no more than 2GB of memory (or 3GB, but only for a short period of time).  Also take into account your per-job memory consumption and the number of jobs you and others are already running on math.princeton.edu.  E.g. running more than one computation that requires 2GB or more will quickly produce an unproductive environment for all users.&lt;br /&gt;
* Computational jobs on math.princeton.edu are automatically reniced, and you should limit yourself to at most 2 computations at any one time.  If your computation is long-running you do not have to renice it yourself, but if you intend to run lots of short ones please do so (as automatic renicing does not kick in immediately).&lt;br /&gt;
=== Run computations and disconnect ===&lt;br /&gt;
If your computation is long-running it is best to start it in such a way that you can log out and the computation will continue.  This also prevents your computations from failing if you lose network connectivity.  You should run your computations with a command of this type:&lt;br /&gt;
 nohup nice -19 mycomputation_program &amp;gt; my_output.txt 2&amp;gt;&amp;amp;1 &amp;amp;&lt;br /&gt;
This will run your job, reniced to 19, and any output (both regular and error) will be placed into the file my_output.txt.  If you want to place the error output into another file you can do:&lt;br /&gt;
 nohup nice -19 mycomputation_program &amp;gt; my_output.txt 2&amp;gt; my_erroroutput.txt &amp;amp;&lt;br /&gt;
If you do not need the output from the command, e.g. because your program dumps its results directly into various files, you can redirect all of its output into /dev/null:&lt;br /&gt;
 nohup nice -19 mycomputation_program &amp;gt; /dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&lt;br /&gt;
=== Run disconnected computations for matlab ===&lt;br /&gt;
If you want to run matlab computations you can do something like:&lt;br /&gt;
 nohup nice -19 matlab -nodisplay -nodesktop -nojvm -nosplash &amp;lt; mymatlab_commands.m &amp;gt; my_output.txt 2&amp;gt;&amp;amp;1 &amp;amp;&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Frequently_Asked_Questions&amp;diff=1741</id>
		<title>Frequently Asked Questions</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Frequently_Asked_Questions&amp;diff=1741"/>
		<updated>2006-10-03T13:36:52Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: added guidelines&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;On this page you will find answers to some of the most frequently asked questions about computing at Fine Hall.&lt;br /&gt;
&lt;br /&gt;
== E-mail ==&lt;br /&gt;
=== Read e-mail ===&lt;br /&gt;
You can read your e-mail in one of the following ways: &lt;br /&gt;
* log in via ssh to math.princeton.edu or pacm.princeton.edu and use pine, mutt or other terminal-based e-mail clients&lt;br /&gt;
* use [http://www.math.princeton.edu/mail Math Dept. WebMail]&lt;br /&gt;
* configure your e-mail client (like Thunderbird, Mozilla, Outlook and others) to access your e-mail via IMAP by following instructions in [[HowTos#E-mail configuration|HowTos section about E-mail configuration]]&lt;br /&gt;
&lt;br /&gt;
=== Forward e-mail from Princeton to your Math/PACM account ===&lt;br /&gt;
Open the [http://www.princeton.edu/imap OIT Account Management Page] in your browser.  You will be asked for your OIT username and password.  Once logged in, click on the &amp;quot;Set Email Delivery&amp;quot; link on the left.  That will bring up the &amp;quot;Where Is My Mail Going&amp;quot; information - if you haven't changed your e-mail delivery location from the default, it is likely to be &amp;lt;tt&amp;gt;yourusername@mail.Princeton.EDU&amp;lt;/tt&amp;gt;.  Click on the &amp;quot;Change Entry&amp;quot; button (found next to the current delivery address) and, on the next screen, forward your Princeton e-mail by setting your primary mail delivery location to your &amp;lt;tt&amp;gt;yourusername@math.princeton.edu&amp;lt;/tt&amp;gt; e-mail address, then click on &amp;quot;Submit Changes&amp;quot;.&lt;br /&gt;
=== Forward e-mail from your math account ===&lt;br /&gt;
To forward all of your math e-mail to another account, e.g. if you are leaving Princeton, create a &amp;lt;tt&amp;gt;.forward&amp;lt;/tt&amp;gt; file in your home directory that contains the e-mail address to which your mail should be forwarded.  You can specify multiple e-mail addresses, each on its own line.&lt;br /&gt;
=== Vacation messages ===&lt;br /&gt;
Vacation messages can be set through Math/PACM webmail by going to [https://www.math.princeton.edu/horde3/vacation/ https://www.math.princeton.edu/horde3/vacation/].  You can also find the vacation settings page under &amp;quot;My Account&amp;quot; in the left-side menu of the [https://www.math.princeton.edu/mail webmail].&lt;br /&gt;
&lt;br /&gt;
On the vacation page you can turn the vacation message on and off, specify the subject and content of the automatic replies, and specify how often they are sent.  Finally, you can even set vacation start and end times.&lt;br /&gt;
&lt;br /&gt;
== Passwords ==&lt;br /&gt;
=== Types of passwords ===&lt;br /&gt;
Your Math/PACM account has two passwords associated with it: the Linux/LDAP password, which is used for everything except accessing the fileserver through Windows file sharing (also called SMB, CIFS or Samba file sharing), and the Windows/CIFS password, which is used for that file sharing.&lt;br /&gt;
=== Password changing ===&lt;br /&gt;
If you need to change your password you should do it through the [https://www.math.princeton.edu/horde3/passwd/ Math/PACM webmail interface].  This way your LDAP password and your Windows/CIFS password are changed together, which ensures they stay the same.  Once logged in, you will be prompted for your current password and your new password.&lt;br /&gt;
&lt;br /&gt;
You can also find the password changing page under &amp;quot;My Account&amp;quot; in the left-side menu of the [https://www.math.princeton.edu/mail webmail].&lt;br /&gt;
&lt;br /&gt;
== Running computations ==&lt;br /&gt;
=== Computation guidelines ===&lt;br /&gt;
These are the guidelines for running computations on Math/PACM machines:&lt;br /&gt;
* Unless you are running computations on dedicated machines (like the Comp or Macomp cluster or your own desktop), all jobs should be reniced to 19 (the lowest scheduling priority), e.g.:&lt;br /&gt;
 nice -n 19 mycomputation&lt;br /&gt;
This ensures that interactive users of the machine are not impacted by your calculations.  Your job will still get all the available idle CPU time.&lt;br /&gt;
* Please make sure your computation does not consume too much memory.  This is particularly important if you intend to run your computations on desktops used by others.  Heavy memory use on machines that do not have much to begin with will push the operating system into swapping, which will severely impact both the user of the desktop and your own computation.  Most Fine Hall desktops have only 512MB of RAM, so you should make sure your job doesn't consume more than, say, 100MB or so - the less the better.&lt;br /&gt;
* If your job requires a lot of memory and you do not have access to the macomp cluster, please feel free to run it on the login server - math.princeton.edu - which has a pair of very fast processors and 4GB of memory.  You should still limit your job to no more than 2GB of memory (or 3GB, but only for a short period of time).  Also take into account your per-job memory consumption and the number of jobs you and others are already running on math.princeton.edu.  E.g. running more than one computation that requires 2GB or more will quickly produce an unproductive environment for all users.&lt;br /&gt;
* Computational jobs on math.princeton.edu are automatically reniced, and you should limit yourself to at most 2 computations at any one time.  If your computation is long-running you do not have to renice it yourself, but if you intend to run lots of short ones please do so (as automatic renicing does not kick in immediately).&lt;br /&gt;
=== Run computations and disconnect ===&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=News&amp;diff=1499</id>
		<title>News</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=News&amp;diff=1499"/>
		<updated>2006-03-10T15:24:01Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This webpage contains news about computing changes at Fine Hall.  In particular you will find news about significant software and server updates, outages and other relevant news organized by month and year.&lt;br /&gt;
&lt;br /&gt;
== March, 2006 ==&lt;br /&gt;
=== March 10, 2006 ===&lt;br /&gt;
* Kernel upgrade on all workstations proceeding in the background (workstations rebooting during idle times), part of the PU_IAS 2/2WS U3 upgrade in which more than 100 rpms are being upgraded. ([[User:Plazonic|Plazonic]] 10:23, 10 March 2006 (EST))&lt;br /&gt;
* Login server pacm.princeton.edu and xdm server xdm.princeton.edu rebooted today for kernel upgrade. ([[User:Plazonic|Plazonic]] 10:24, 10 March 2006 (EST))&lt;br /&gt;
* Maintenance reboot for the login server math.princeton.edu scheduled for 4:45pm for the kernel upgrade.  Short outage during which pacm.princeton.edu will continue to be available. ([[User:Plazonic|Plazonic]] 10:25, 10 March 2006 (EST))&lt;br /&gt;
* Various services and servers will be rebooted during the next few days in order to perform various upgrades, including a kernel upgrade. These will mostly be transparent to end users or else cause very short outages. ([[User:Plazonic|Plazonic]] 10:26, 10 March 2006 (EST))&lt;br /&gt;
&lt;br /&gt;
== November, 2005 ==&lt;br /&gt;
=== November 30, 2005 ===&lt;br /&gt;
* PACM public printer located in room 205 (inventively called &amp;quot;205&amp;quot;) replaced with a new HP LaserJet 4250DTN.  All printer names and aliases remain the same.  The old 205 printer (HP LaserJet 4050DN) moved to Fine Hall 221 (name &amp;quot;221&amp;quot;). ([[User:Plazonic|Plazonic]] 16:43, 30 November 2005 (EST))&lt;br /&gt;
=== November 21, 2005 ===&lt;br /&gt;
* mpich-p4 compiled with gcc, intel compiler version 8.1 and intel compiler version 9.1 pushed for installation on all Linux desktops/workstations/login servers and the old AMD computational cluster. Command line tool to use to choose the version of mpich is &amp;lt;tt&amp;gt;mpichset&amp;lt;/tt&amp;gt; ([[User:Plazonic|Plazonic]] 16:51, 21 November 2005 (EST))&lt;br /&gt;
=== November 18, 2005 ===&lt;br /&gt;
* Intel compiler version 8.1 upgraded to a newer revision and also added version 9.0 on all Linux desktops/workstations/login servers and the old AMD computational cluster.  Also added a new command line tool to set the version of compiler you want to use - &amp;lt;tt&amp;gt;intelcompilerset&amp;lt;/tt&amp;gt;. ([[User:Plazonic|Plazonic]] 16:30, 18 November 2005 (EST))&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=News&amp;diff=1498</id>
		<title>News</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=News&amp;diff=1498"/>
		<updated>2006-03-10T15:23:03Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: March 10th 2006 upgrades&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This webpage contains news about computing changes at Fine Hall.  In particular you will find news about significant software and server updates, outages and other relevant news organized by month and year.&lt;br /&gt;
&lt;br /&gt;
== March, 2006 ==&lt;br /&gt;
=== March 10, 2006 ===&lt;br /&gt;
* Kernel upgrade on all workstations proceeding in the background (workstations rebooting during idle times), part of the PU_IAS 2/2WS U3 upgrade in which more than 100 rpms are being upgraded.&lt;br /&gt;
* Login server pacm.princeton.edu and xdm server xdm.princeton.edu rebooted today for kernel upgrade.&lt;br /&gt;
* Maintenance reboot for the login server math.princeton.edu scheduled for 4:45pm for the kernel upgrade.  Short outage during which pacm.princeton.edu will continue to be available. &lt;br /&gt;
* Various services and servers will be rebooted during the next few days in order to perform various upgrades, including a kernel upgrade. These will mostly be transparent to end users or else cause very short outages.&lt;br /&gt;
&lt;br /&gt;
== November, 2005 ==&lt;br /&gt;
=== November 30, 2005 ===&lt;br /&gt;
* PACM public printer located in room 205 (inventively called &amp;quot;205&amp;quot;) replaced with a new HP LaserJet 4250DTN.  All printer names and aliases remain the same.  The old 205 printer (HP LaserJet 4050DN) moved to Fine Hall 221 (name &amp;quot;221&amp;quot;). ([[User:Plazonic|Plazonic]] 16:43, 30 November 2005 (EST))&lt;br /&gt;
=== November 21, 2005 ===&lt;br /&gt;
* mpich-p4 compiled with gcc, intel compiler version 8.1 and intel compiler version 9.1 pushed for installation on all Linux desktops/workstations/login servers and the old AMD computational cluster. Command line tool to use to choose the version of mpich is &amp;lt;tt&amp;gt;mpichset&amp;lt;/tt&amp;gt; ([[User:Plazonic|Plazonic]] 16:51, 21 November 2005 (EST))&lt;br /&gt;
=== November 18, 2005 ===&lt;br /&gt;
* Intel compiler version 8.1 upgraded to a newer revision and also added version 9.0 on all Linux desktops/workstations/login servers and the old AMD computational cluster.  Also added a new command line tool to set the version of compiler you want to use - &amp;lt;tt&amp;gt;intelcompilerset&amp;lt;/tt&amp;gt;. ([[User:Plazonic|Plazonic]] 16:30, 18 November 2005 (EST))&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=MediaWiki:Sidebar&amp;diff=1492</id>
		<title>MediaWiki:Sidebar</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=MediaWiki:Sidebar&amp;diff=1492"/>
		<updated>2006-03-08T17:00:58Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: Add menu option for EAC group&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* navigation&lt;br /&gt;
** mainpage|mainpage&lt;br /&gt;
** howtos-url|howtos&lt;br /&gt;
** faq-url|faq&lt;br /&gt;
** docs-url|docs&lt;br /&gt;
** eacdocs-url|eacdocs&lt;br /&gt;
** news-url|news&lt;br /&gt;
** recentchanges-url|recentchanges&lt;br /&gt;
** randompage-url|randompage&lt;br /&gt;
** helppage|help&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Restore_files_from_snapshots_on_Linux_from_home_directory_on_Math_file_server&amp;diff=1491</id>
		<title>HowTos:Restore files from snapshots on Linux from home directory on Math file server</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=HowTos:Restore_files_from_snapshots_on_Linux_from_home_directory_on_Math_file_server&amp;diff=1491"/>
		<updated>2006-01-19T01:32:26Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: make hidden bold so that it is even more emphasized&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Netapp snapshot description}}&lt;br /&gt;
&lt;br /&gt;
== Restore File From Snapshots on Linux ==&lt;br /&gt;
These instructions describe how to restore a deleted or older copy of a file from your home directory on a Linux machine.  &lt;br /&gt;
&lt;br /&gt;
First, change directories to where the file was/is located - open a terminal (either on a Linux desktop or via ssh to one of the login servers) and cd to that directory.  For example, if the deleted file was &amp;lt;tt&amp;gt;thesis/1st_draft/chapter1.tex&amp;lt;/tt&amp;gt; then make sure you are in the directory &amp;lt;tt&amp;gt;thesis/1st_draft&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd thesis/1st_draft&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Proceed by changing into the .snapshot directory.  This is a '''hidden''' subdirectory that exists in every directory in your home directory and contains all snapshots:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;cd .snapshot&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In there you will find directories like: nightly.0, nightly.1, nightly.2, ..., hourly.0, hourly.1, ..., hourly.10 (just type ''ls'' to see them).  Change to the directory that still contains your file and copy it back to where it was.  E.g.&lt;br /&gt;
{|cellspacing=&amp;quot;4&amp;quot;&lt;br /&gt;
|&amp;lt;tt&amp;gt;cd nightly.0&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;tt&amp;gt;cp chapter1.tex ../..&amp;lt;/tt&amp;gt; || or || &amp;lt;tt&amp;gt;cp chapter1.tex ~/thesis/1st_draft&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
An even better way to restore files is to look at all copies of your missing file in all of the snapshots.  In our example you could do &amp;lt;tt&amp;gt;ls -l */chapter1.tex&amp;lt;/tt&amp;gt; (after changing the directory to .snapshot) which will result in an output that resembles:&lt;br /&gt;
{|cellspacing=&amp;quot;4&amp;quot;&lt;br /&gt;
|&amp;lt;tt&amp;gt;-rw-------  1 mathuser grad 20323 Jul  5 16:44 hourly.0/chapter1.tex&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;tt&amp;gt;-rw-------  1 mathuser grad 19800 Jul  5 15:32 hourly.1/chapter1.tex&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;tt&amp;gt;-rw-------  1 mathuser grad 19543 Jul  5 13:20 hourly.2/chapter1.tex&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;tt&amp;gt;-rw-------  1 mathuser grad 18702 Jul  5 09:16 hourly.3/chapter1.tex&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;tt&amp;gt;-rw-------  1 mathuser grad 18702 Jul  5 09:16 hourly.4/chapter1.tex&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
Then you can pick the precise version of the file you want to restore.  This is particularly useful if you need to retrieve an older copy of the file, e.g. in case you made changes that you want to revert or if you have overwritten it by mistake.&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
	<entry>
		<id>https://cgi.math.princeton.edu/compudocwiki/index.php?title=Main_Page&amp;diff=1490</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://cgi.math.princeton.edu/compudocwiki/index.php?title=Main_Page&amp;diff=1490"/>
		<updated>2005-12-07T19:20:21Z</updated>

		<summary type="html">&lt;p&gt;Plazonic: images should be http not https&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Quick Links ==&lt;br /&gt;
{| border=&amp;quot;0&amp;quot; cellpadding=&amp;quot;2&amp;quot; width=&amp;quot;90%&amp;quot;&lt;br /&gt;
! [http://math.princeton.edu/ssh.html http://cgi.math.princeton.edu/compudocwiki/images/7/74/Terminal.jpg]!![https://www.math.princeton.edu/mail http://cgi.math.princeton.edu/compudocwiki/images/2/24/Email.jpg]&lt;br /&gt;
|-&lt;br /&gt;
! Web SSH http://math.princeton.edu/ssh.html &amp;lt;br&amp;gt; Alternate Web SSH http://math.princeton.edu/ssh2.html!! Webmail https://www.math.princeton.edu/mail&lt;br /&gt;
|- style=&amp;quot;height:40px&amp;quot;&lt;br /&gt;
! &lt;br /&gt;
|- &lt;br /&gt;
! [https://www.math.princeton.edu/horde3/passwd/ http://cgi.math.princeton.edu/compudocwiki/images/6/64/Password.jpg] !! [https://www.math.princeton.edu/horde3/vacation/ http://cgi.math.princeton.edu/compudocwiki/images/3/3a/Vacation.jpg]&lt;br /&gt;
|- &lt;br /&gt;
! Change Password !! Set Vacation Message&lt;br /&gt;
|- &lt;br /&gt;
|+&amp;amp;nbsp;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Contact us or ask for help =&lt;br /&gt;
In order to contact Math/PACM computing support please e-mail [mailto:compudoc@princeton.edu compudoc@princeton.edu].&lt;/div&gt;</summary>
		<author><name>Plazonic</name></author>
	</entry>
</feed>