You can check the setup by looking at Installing Torque 4.2.5 on CentOS 6. You might want to follow up with the optional setup in Adding and Specifying Compute Resources at Torque to make sure your core counts are correct.
Step 1b: Ensure key exchange between the submission node and the Torque server
For more information, see Auto SSH Login without Password.

Step 1c: Configure the submission node as a non-default queue (Optional)
For more information, see Using Torque to set up a Queue to direct users to a subset of resources.

Step 2: Registering the Submission Node in Torque
If you do not wish this node to be a compute resource, you can put it in a non-default or unique queue to which users will not submit jobs.
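The linked article covers the queue setup in detail; as a minimal sketch (the queue name `testq` and the node property `subnode` are illustrative assumptions, not from the original), a non-default queue bound to a subset of nodes can be created with qmgr:

```shell
# Create a non-default execution queue (name 'testq' is an assumption)
qmgr -c 'create queue testq queue_type = execution'
qmgr -c 'set queue testq enabled = True'
qmgr -c 'set queue testq started = True'

# Route jobs in this queue only to nodes carrying the 'subnode' property
# (the property must also be listed for those nodes in the server's nodes file)
qmgr -c 'set queue testq resources_default.neednodes = subnode'
```

Jobs would then reach it only explicitly, via `qsub -q testq`; since `testq` is not the default queue, ordinary submissions never land on it.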
These error messages are returned to the client (e.g. the CREAM CLI user or ICE) along with their meanings. For submissions to CREAM through the WMS, they might appear under org.cream.jobmanagement.
BLParser Client - initialize Connection: getting info about BLParser (xxx) from BLAH (retry count=yy/zz) ...
BLParser Client - initialize Connection error: cannot get BLParser (lsf) HOST:PORT information from BLAH.
The idea is to use TORQUE in a very minimal configuration.
There will be no fuss with Maui or similar schedulers; we will only use packages we can get from the Debian/Ubuntu software repositories.
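Under that constraint the install reduces to the stock packages; a sketch assuming the package names used in older Debian/Ubuntu releases (they may differ or be absent in current ones):

```shell
# On the head node: server, the basic pbs_sched scheduler (no Maui), and client tools
apt-get install torque-server torque-scheduler torque-client

# On each compute node: only the per-machine execution daemon
apt-get install torque-mom
```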
The user name defined in the user list is either undefined or different from the user name of the job submitter, or the user's UID and GID on the executing node differ from those on the submitting node.
Ubuntu might work just as well, since Ubuntu is very similar to Debian.

-- Starting command on Tue Apr 19 2016 with 20856 GB free disk space

    qsub \
      -l mem=126g -l nodes=1:ppn=32 \
      -d `pwd` -N "meryl_1st_try" \
      -t 1-1 \
      -j oe -o /home/scbbcluster/nitesh_mpi/pacbio_assembly/canu-1.2/Linux-amd64/bin/1st_trial/unitigging/0-mercounts/meryl.$PBS_\
      /home/scbbcluster/nitesh_mpi/pacbio_assembly/canu-1.2/Linux-amd64/bin/1st_trial/unitigging/0-mercounts/

    qsub: submit error (Bad UID for job execution MSG=ruserok failed validating scbbcluster/scbbcluster from r3node4)

-- Finished on Tue Apr 19 2016 (3 seconds) with 20856 GB free disk space

    ERROR: Failed with exit code 177.

    qsub \
      -l mem=126g -l nodes=1:ppn=32 \
      -d `pwd` -N "meryl_1st_try" \
      -t 1-1 \
      -j oe -o /home/scbbcluster/nitesh_mpi/pacbio_assembly/canu-1.2/Linux-amd64/bin/1st_trial/unitigging/0-mercounts/meryl.$PBS_\
      /home/scbbcluster/nitesh_mpi/pacbio_assembly/canu-1.2/Linux-amd64/bin/1st_trial/unitigging/0-mercounts/

By searching for the error "submit error (Bad UID for job execution MSG=ruserok failed validating scbbcluster/scbbcluster from r3node4)", I found some suggestions, and after trying them I was able to overcome this error, so I am posting the solution here.

Step 1: Check that your Torque server configuration has the following.
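The original post does not spell the settings out at this point; as a hedged sketch, these are the Torque server parameters that most commonly resolve a ruserok/"Bad UID" failure (the host name r3node4 is taken from the error message above; run as root on the Torque server host):

```shell
# Allow jobs to be submitted from any node known to the server
qmgr -c 'set server allow_node_submit = True'

# Or explicitly whitelist the host that failed validation (r3node4, from the error above)
qmgr -c 'set server submit_hosts += r3node4'

# If ruserok still fails, adding the client host to /etc/hosts.equiv
# on the server is the traditional rhosts-style fix
echo 'r3node4' >> /etc/hosts.equiv
```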
Stack trace:

    canu::Defaults::caFailure('Failed to submit batch jobs', undef) called at /home/scbbcluster/nitesh_mpi/pacbio_assembly/canu-1.2/Linux-amd64/bin/lib/canu/line 1139
    canu::Execution::submitOrRunParallelJob('/home/scbbcluster/nitesh_mpi/pacbio_assembly/canu-1.2/Linux-a...', '1st_try', 'meryl', '/home/scbbcluster/nitesh_mpi/pacbio_assembly/canu-1.2/Linux-a...', 'meryl', 1) called at /home/scbbcluster/nitesh_mpi/pacbio_assembly/canu-1.2/Linux-amd64/bin/lib/canu/line 370
    canu::Meryl::merylCheck('/home/scbbcluster/nitesh_mpi/pacbio_assembly/canu-1.2/Linux-a...', '1st_try', 'utg') called at /home/scbbcluster/nitesh_mpi/pacbio_assembly/canu-1.2/Linux-amd64/bin/canu line 512

canu failed with 'Failed to submit batch jobs'.

That is, if you ask to repeat your simulation many times (say, 100) by issuing the command rfluka -N0 -M100 example, each process is launched serially instead of utilizing all available cores on your PC. A solution can be to use a job queuing system and a scheduler.
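For instance, the 100 cycles of the rfluka run above could be submitted as independent Torque jobs instead of one serial loop. A hedged sketch (the job names, the single-core resource request, and the assumption that `rfluka -N$((i-1)) -M$i` runs exactly cycle i of the input `example` are mine, not from the original):

```shell
# Submit each of the 100 FLUKA cycles as its own single-core Torque job
# (assumption: splitting -N/-M per cycle reproduces the serial -N0 -M100 run)
for i in $(seq 1 100); do
  echo "cd \$PBS_O_WORKDIR && rfluka -N$((i-1)) -M$i example" | \
    qsub -N fluka_cycle_$i -l nodes=1:ppn=1
done
```

The scheduler then spreads the cycles across whatever cores are free, which is exactly the behaviour the serial rfluka loop lacks.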