bash or one of its derivatives. Several "login scripts" are loaded in the new shell. All that means is that the shell commands in those files are run before it gives you a prompt and lets you start typing your own commands. These files generally have names like .bashrc or .zshrc, depending on your shell. (It can get complicated: some login scripts are run globally when anyone logs into the computer, and some only run for your user account.)
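For example, a line like the one below in your .bashrc (or .zshrc) would run each time you open a new shell. The alias shown here is just an illustration, not something you need to add:
# Illustrative login script line: define a shortcut for a long directory listing
alias ll='ls -l'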
$PATH
When you run a command like python3 at the command line, your shell searches all of the directories listed in your $PATH, in order, to find and execute that command. Errors such as "command not found" when you try to run a program mean you need to add the directory containing that program to your $PATH. To show the current directories that are in your $PATH, use this:
echo $PATH
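The output is a single colon-separated list of directories. The exact list will differ on your computer, but it will look something like this (illustrative only):
/usr/local/bin:/usr/bin:/bin:/home/yourusername/bin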
To add a new directory to your $PATH, you can run this command:
PATH=/your/directory/here:$PATH
Note the :$PATH part of this command. If you leave it off, then your shell will not know where to look for common commands like ls, cd, etc.!
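Also note that this change only lasts for your current shell session. To make it permanent, you can add an export line to your login script, for example (shown for bash, with the directory as a placeholder):
# Append a directory to PATH in every new bash shell
echo 'export PATH=/your/directory/here:$PATH' >> ~/.bashrc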
Generally, you will want to add the new directory to either the beginning or the end of your PATH list. The order matters: when you invoke a command, the directories are searched from beginning to end, and the first match is the one that is run. Because this can lead to confusion, there is even a command that gives you the path to the executable that will be run if you type a command:
which <command>
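For example, to see which python3 your shell would run:
# Print the full path of the python3 executable that will be used
which python3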
The address of the Lonestar6 head node is ls6.tacc.utexas.edu, so to ssh to it you use:
ssh <username>@ls6.tacc.utexas.edu
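If you connect often, you can optionally add an entry like this to the ~/.ssh/config file on your own computer (the username here is a placeholder), after which plain ssh ls6 will work:
# ~/.ssh/config entry (illustrative)
Host ls6
    HostName ls6.tacc.utexas.edu
    User your_tacc_username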
To get an interactive session on a COMPUTE NODE, run:
idev -m 60
The -m 60 option asks for a 60-minute session on one compute node. Currently, you can make this as high as 120 minutes. For longer jobs, you will need to learn about submitting jobs to the queue.
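For example, to ask for the current maximum of 120 minutes, and to charge the time to a specific project if you have more than one allocation, something like this should work (the project name is a placeholder; run idev --help to see all options):
# Request a 120-minute interactive session on one compute node
idev -m 120 -A your-project-name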
After some informational messages, you will get a new prompt, and now you are running commands on the COMPUTE NODE. Compute nodes have a lot of cores (processors) and memory (RAM), so you can (and should) run many jobs in parallel on one of these nodes if you are using it for compute. The idev command is mostly meant for development (that is, writing and testing new code/tools), but it can be used for short tasks, particularly if you are using a job manager like Snakemake that can intelligently use the available resources.
If you get lost and can't remember if you are on the HEAD NODE or a COMPUTE NODE, you can use this command:
hostname
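On a head node, the name printed will contain "login" (for example, login1.ls6.tacc.utexas.edu); on a compute node it will be a different, node-specific name.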
$HOME, $WORK, and $SCRATCH
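These three environment variables point to your personal storage areas on TACC systems. You can print where each one is located with echo:
# Show the filesystem locations of your three storage areas
echo $HOME
echo $WORK
echo $SCRATCH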
You can use mamba everywhere you would use conda for running commands. Bioconda makes it possible to install additional packages related to bioinformatics and computational biology. You'll want all three of these working together in your environment.
Run these commands to install mamba in your base environment and initialize it:
conda install mamba
mamba init
Now, whenever you open a new shell, the base conda environment will be loaded.
It's OK to install some general-purpose utilities in this environment, but you should generally *install each of your major bioinformatics tools (or sets of tools) in its own environment*.
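You can check that everything is working by printing the mamba version and listing your existing environments:
# Confirm mamba is installed and list existing conda environments
mamba --version
conda env list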
This sequence of commands creates an environment called breseq-env and installs breseq in it:
mamba create -n breseq-env
mamba activate breseq-env
mamba install breseq
To install a specific version of a tool, specify it like this:
mamba install breseq=0.36.1
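If you are not sure which versions are available, you can search the configured channels first, for example:
# List the versions of breseq available in your channels
conda search breseq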
You can export a description of your current environment to a yaml file:
conda env export > environment.yml
You can also create an environment from a yaml file created by someone else, so you can reproduce their work!
conda env create -f environment.yml
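For reference, an environment.yml file is just a short yaml document listing the environment's name, channels, and packages. A minimal sketch (the name, channels, and version here are only illustrative) looks something like this:
name: breseq-env
channels:
  - conda-forge
  - bioconda
dependencies:
  - breseq=0.36.1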
Use ssh to connect to Lonestar6 using your username.