Article - CS270287
Error "IllegalArgumentException: Positive number of slices required" is received when executing a cluster job in ThingWorx Analytics 8.0
Modified: 03-Oct-2017
Applies To
- ThingWorx Analytics 8.0
Description
- The following error response is received when executing a cluster job:
{
"resultId": 41,
"uri": "http://XXXX:8080/analytics/1.0/status/41",
"progress": 0,
"message": "FAILED",
"messageInfo": "IllegalArgumentException: Positive number of slices required",
"startTime": "2017-09-12T13:59:27.574Z",
"endTime": "2017-09-12T14:00:23.875Z",
"runTime": "0:00:56.301",
"queuedStartTime": "2017-09-12T13:59:26.999Z",
"queuedDuration": "0:00:00.575",
"dataset": "beanpro",
"jobType": "CLUSTER_DESCRIPTION_MODEL"
}
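The status shown above can be re-fetched at any time from the returned uri field. A minimal sketch using only JDK classes follows; the host placeholder XXXX is kept from the response and must be replaced with the actual server name, and the JobStatusCheck class name is illustrative only:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class JobStatusCheck {
    public static void main(String[] args) throws Exception {
        // Poll the status URI returned for the job (resultId 41 in the response above)
        URL url = new URL("http://XXXX:8080/analytics/1.0/status/41");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                // Prints the JSON status document, including messageInfo on failure
                System.out.println(line);
            }
        }
    }
}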
"resultId": 41,
"uri": "http://XXXX:8080/analytics/1.0/status/41",
"progress": 0,
"message": "FAILED",
"messageInfo": "IllegalArgumentException: Positive number of slices required",
"startTime": "2017-09-12T13:59:27.574Z",
"endTime": "2017-09-12T14:00:23.875Z",
"runTime": "0:00:56.301",
"queuedStartTime": "2017-09-12T13:59:26.999Z",
"queuedDuration": "0:00:00.575",
"dataset": "beanpro",
"jobType": "CLUSTER_DESCRIPTION_MODEL"
}
- The following error is reported in grid-worker.log:
java.lang.IllegalArgumentException: Positive number of slices required
    at org.apache.spark.rdd.ParallelCollectionRDD$.slice(ParallelCollectionRDD.scala:119)
    at org.apache.spark.rdd.ParallelCollectionRDD.getPartitions(ParallelCollectionRDD.scala:97)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65)
    at org.apache.spark.api.java.JavaPairRDD.reduceByKey(JavaPairRDD.scala:535)
    at com.coldlight.ai.clusters.util.StatUtils.mapReduceFeatureCharacteristics(StatUtils.java:75)
    at com.coldlight.ai.clusters.util.StatUtils.featureCharacteristics(StatUtils.java:66)
    at com.coldlight.ai.clusters.ClusterCharacteristicService.makeCharacteristics(ClusterCharacteristicService.java:54)
    at com.coldlight.ai.clusters.ClusterCharacteristicService.makeCharacteristicsModel(ClusterCharacteristicService.java:28)
    at com.coldlight.ai.clusters.ClusterService.run(ClusterService.java:70)
    at com.coldlight.neuron.services.ai.ClusterJob.runJob(ClusterJob.java:58)
    at com.coldlight.neuron.job.NeuronJob.run(NeuronJob.java:123)
    at com.coldlight.ccc.job.dempsy.DempsyClusterJobExecutor$DempsyPersistentClusterWatcher.runJobUploadResultsAndCleanup(DempsyClusterJobExecutor.java:334)
    at com.coldlight.ccc.job.dempsy.DempsyClusterJobExecutor$DempsyPersistentClusterWatcher.execute(DempsyClusterJobExecutor.java:482)
    at com.coldlight.ccc.executor.PersistentTask.executeUntilWorks(PersistentTask.java:92)
    at com.coldlight.ccc.executor.PersistentTask.process(PersistentTask.java:58)
    at com.nokia.dempsy.cluster.zookeeper.ZookeeperSession$WatcherProxy.process(ZookeeperSession.java:279)
    at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
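For context, the message comes from Spark itself: ParallelCollectionRDD rejects any request to split a collection into fewer than one slice (partition). The following sketch reproduces the same exception outside of ThingWorx Analytics; it assumes a local Spark dependency, and the SliceRepro class name and local[*] master are illustrative only, not part of the product code:

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SliceRepro {
    public static void main(String[] args) {
        // Local master is illustrative only; the grid worker runs its own context
        SparkConf conf = new SparkConf().setAppName("slice-repro").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        try {
            // Spark requires numSlices >= 1. Passing 0 (or a negative value) makes
            // ParallelCollectionRDD.slice throw the exact exception seen above:
            // java.lang.IllegalArgumentException: Positive number of slices required
            sc.parallelize(Arrays.asList(1, 2, 3), 0).count();
        } finally {
            sc.stop();
        }
    }
}

Note that the check fires lazily: in the trace above, the invalid slice count is only detected when reduceByKey in StatUtils.mapReduceFeatureCharacteristics forces Spark to compute the RDD's partitions, which is why the job runs for nearly a minute before reporting FAILED.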