

KNOWN ISSUES

GENERAL FUNCTIONALITY:

What if the job completes and I don't see my tree? Apart from an obvious job failure, which can be diagnosed using the stdout.txt and stderr.txt files, there are two principal reasons why expected results files may not appear. The first is that the job reached its wall time limit before it completed. If this is the case, the scheduler_stderr.txt file will contain a message like this: =>> PBS: job killed: walltime 1835 exceeded limit 1800. You can fix this by "cloning" the job, opening the parameter pane, and increasing the maximum allowed run time. The second is that the job failed due to a transient system problem (this should be rare) or a specific known issue with the code (see the text for specific codes below). After checking to be sure the job did not hit its wall time limit, please report any jobs that fail to produce the expected results to us.

Intermediate Results: The CIPRES Science Gateway supports delivery of intermediate results. To open the intermediate results page, just click the "View Status" button. When that page opens, you will see an "Intermediate results" hyperlink. Click it and you will have access to all run files. As soon as you submit the job, the "Intermediate results" link will show your submitted input files and scheduling information files.

What if I don't see intermediate results? If you see only input files and scheduling files, but no true intermediate files, it means your job is waiting in the queue. Once the job has begun to execute, the first file that appears is start.txt, which notes the beginning of the run; the other intermediate files will appear shortly after. If you request a short job (0.5 h or so), the job goes into a debug queue and will usually start within 15 minutes. For such jobs, you can keep the intermediate results page open and refresh it from time to time to see when the intermediate files appear.

For a long job, the queue time can be long, and it may take hours for the job to start. In that case, it may seem like you are getting no intermediate results, but it really means there aren’t any results to see yet. The intermediate results for a given job are only visible while the job is actually executing. Once the job is complete, clicking the intermediate file link will show a blank screen.

Results Delivery: Results delivery is far more robust in the current CIPRES Science Gateway than in its prior incarnations. Sometimes (hopefully rarely) a machine will go offline during a run; if that happens, you will not be able to see intermediate results. The CIPRES Science Gateway application will detect when the machine comes back online and deliver the results within 30 minutes of when they become available. If you experience a problem with the return of results, please contact us, and we will try to recover your results. It is always helpful if you provide the job handle when asking about a job. Instructions for finding the job handle are on the Gateway front page.

Large Results Delivery: Some codes produce very large results, especially BEAST and PhyloBayes. Usually this is because users request far more samples than they need; typically 10,000 samples is adequate for a Bayesian analysis. When results files exceed 4 GB in size, they cannot be returned via the conventional web interface. Instead, they are placed in a special location for large files. The user is notified of the problem and provided with a link to their results.

Multiple logins: If a user is logged in to the application more than once, submitted jobs will fail. The error message produced (under the View Error button) will look like this:

Tue Jan 19 10:40:34 PST 2010 > INPUTSTAGING : ERROR : NGBW-JOB-yourjobname-somenumbersandletters : Cannot configure FileHandler!

Google Chrome issue: The Google Chrome browser has issues with data uploading and job creation. We expect to address this, but it is not currently a top priority. Please let us know if you feel that use of the Chrome browser is essential for your work.

BEAST:

The BEAST User Group is here:

Run Failures: Some jobs fail to run because of an incompatibility with BEAGLE. The error is in BEAST, and the developers are working on these issues. In the meantime, you can run (more slowly) using BEAST without BEAGLE; just check the box on the parameter page that says "Do not use Beagle."

MrBayes:

The MrBayes User Forum is here:

MrBayes Blocks: The interface for MrBayes will overwrite values in the MrBayes block UNLESS the interface is deactivated by checking the box that says "My Nexus file has a MrBayes block." There is no planned fix for this issue; user caution is required.

Set autoclose = no : A "set autoclose = no" statement in the MrBayes block hangs the compute node, which then has to be restarted manually. This wastes time and resources. Please be sure your MrBayes block either omits the autoclose statement or says set autoclose = yes.
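For reference, a minimal MrBayes block that satisfies this requirement might look like the sketch below; the model and run-length settings shown are placeholders, not recommendations, so substitute values appropriate to your analysis.

begin mrbayes;
  set autoclose=yes;                 [avoids the interactive prompt that hangs the node]
  lset nst=6 rates=gamma;            [example model settings; adjust for your data]
  mcmc ngen=1000000 samplefreq=500;  [example run-length settings]
  sump;
  sumt;
end;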

Runs crash without explanation: A very small percentage of MrBayes jobs start but fail quickly, without an obvious error message. Usually this is cured by unchecking the box that says "Use BEAGLE" in the interface. We haven't figured out what the underlying issue is yet, but this has cured all such problems so far. Although it is not displayed as an obvious error, this kind of failure with MrBayes does seem to leave a message like this: [gcn-19-65.sdsc.edu:75624] *** An error occurred in MPI_Waitall

Command changes in MB 3.2: Several users have reported that cloned jobs fail with the new MrBayes 3.2.1 code. Many of these cases are the result of command changes in MrBayes 3.2.1 relative to MrBayes 3.1.2. The changes we have seen are as follows:

The parameter 'startingtrees' should be replaced with 'starttree'.

The parameter 'printtofile' has been eliminated, and must be removed from your input file.

The parameter 'displaygeq' must be replaced with 'minpartfreq'.

These changes will affect anyone who configures their MrBayes run through a MrBayes block in their Nexus file. If you encounter this issue, just download the infile.nex file, edit the MrBayes block appropriately, and re-upload it; your run should then go fine. If you encounter other command changes not listed here, please let us know.
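As an illustration only (the parameter values are hypothetical, and the commands in your own block may differ), editing a 3.1.2-style block for 3.2.1 along the lines above would change something like

  mcmc ngen=1000000 startingtrees=random printtofile=yes;
  sumt displaygeq=0.05;

into

  mcmc ngen=1000000 starttree=random;
  sumt minpartfreq=0.05;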

RAxML:

The RAxML User Group is here.

Input Format: RAxML on the CIPRES Science Gateway only accepts input files in relaxed Phylip format.
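For reference, a minimal alignment in relaxed Phylip format looks like the sketch below (the taxon names and sequences are invented). The first line gives the number of taxa and the number of characters; each subsequent line gives a taxon name, whitespace, and the sequence, with no fixed ten-character limit on the name.

4 12
Homo_sapiens     ACGTACGTACGT
Pan_troglodytes  ACGTACGTACGA
Gorilla_gorilla  ACGTACGAACGT
Pongo_pygmaeus   ACGAACGTACGT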

Multiple Outgroups: When specifying more than one outgroup, do not introduce blank spaces. Use outgroup1,outgroup2 rather than outgroup1, outgroup2. In the latter case you will see the following error message: “Error, you must specify a model of substitution with the ‘-m’ option”

Constraints: RAxML is a bit finicky about its constraint requirements. The constraint tree must include all taxa, an outgroup must be specified, and there must be no stray white space in the uploaded constraint tree.
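As an illustration (the taxon names are hypothetical), a constraint tree in Newick format that contains every taxon in the alignment, with no embedded spaces, might look like this, with the outgroup still specified separately in the interface as described above:

((Homo_sapiens,Pan_troglodytes),(Gorilla_gorilla,Pongo_pygmaeus));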

GARLI:

The Garli User Forum is here:


Missing and gap characters: For unknown characters the default is "?", although with Nexus you can define it to be whatever you want. Thus, a Nexus dataset that uses only X and not ? can be read fine by adding missing=X to the Format line. If the dataset is in a non-Nexus format (usually Phylip), then the only options are ? and -. In all cases the gap character, by default "-", is treated identically to a missing character.
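For example, a Nexus Format line that declares X as the missing character might read as follows (the datatype is a placeholder; use the one that matches your data):

  format datatype=protein missing=X gap=-;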

Managing multiple files produced by GARLI: GARLI will produce one tree file for each bootstrap iteration, but it will not calculate a majority-rule consensus tree the way the original GARLI interface did. We plan to address this in the near future, but in the meantime you can calculate the consensus tree using Consense in the CIPRES Science Gateway, or one of many other available tools, including SumTrees (see the sketch below). The data interface allows you to download all result files from a single job to your local hard drive as a collection.
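If you prefer to build the consensus on your own machine, the following is a minimal sketch using the DendroPy library (the same package that provides SumTrees); the file names are placeholders, and the schema argument should match whatever format GARLI wrote the bootstrap trees in ("nexus" or "newick").

# Minimal sketch: 50% majority-rule consensus from GARLI bootstrap trees using DendroPy.
# File names are placeholders; set schema to match the format of your tree files.
import dendropy

# Load all bootstrap trees (one tree per bootstrap replicate).
boot_trees = dendropy.TreeList.get(path="garli_bootstrap_trees.tre", schema="nexus")

# Build the 50% majority-rule consensus tree.
consensus = boot_trees.consensus(min_freq=0.5)

# Write the consensus tree to a Newick file.
consensus.write(path="consensus.tre", schema="newick")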

If there is a tool or a feature you need, please let us know.


CIPRES – Cyberinfrastructure for Phylogenetic Research