Hello,
I am using Hadoop 1.0 via Cloudera CDH 4.6 and am attempting to run the AMPLab benchmark on Hive.
I completed the scan, aggregate, and join query sets successfully, but the external query fails. This is the output for the query:
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-4.6.0-1.cdh4.6.0.p0.26/lib/hive/lib/hive-common-0.10.0-cdh4.6.0.jar!/hive-log4j.properties
Hive history file=/tmp/hdfs/hive_job_log_63d59f7d-5207-42a4-aee5-2baa7695cafd_81302959.txt
Total MapReduce jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201407241331_0006, Tracking URL = http://hadoop1.sdcorp.global.sandisk.com:50030/jobdetails.jsp?jobid=job_201407241331_0006
Kill Command = /opt/cloudera/parcels/CDH-4.6.0-1.cdh4.6.0.p0.26/lib/hadoop/bin/hadoop job -kill job_201407241331_0006
Hadoop job information for Stage-1: number of mappers: 498; number of reducers: 0
2014-07-25 10:32:14,277 Stage-1 map = 0%, reduce = 0%
2014-07-25 10:33:01,617 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201407241331_0006 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://hadoop1.sdcorp.global.sandisk.com:50030/jobdetails.jsp?jobid=job_201407241331_0006
Examining task ID: task_201407241331_0006_m_000499 (and more) from job job_201407241331_0006
Examining task ID: task_201407241331_0006_m_000039 (and more) from job job_201407241331_0006
Examining task ID: task_201407241331_0006_m_000032 (and more) from job job_201407241331_0006
Examining task ID: task_201407241331_0006_m_000006 (and more) from job job_201407241331_0006
Examining task ID: task_201407241331_0006_m_000016 (and more) from job job_201407241331_0006
Examining task ID: task_201407241331_0006_m_000062 (and more) from job job_201407241331_0006
Examining task ID: task_201407241331_0006_m_000046 (and more) from job job_201407241331_0006
Examining task ID: task_201407241331_0006_m_000061 (and more) from job job_201407241331_0006
Examining task ID: task_201407241331_0006_m_000018 (and more) from job job_201407241331_0006
Examining task ID: task_201407241331_0006_m_000025 (and more) from job job_201407241331_0006
Examining task ID: task_201407241331_0006_m_000064 (and more) from job job_201407241331_0006
Examining task ID: task_201407241331_0006_m_000033 (and more) from job job_201407241331_0006
Examining task ID: task_201407241331_0006_m_000059 (and more) from job job_201407241331_0006
Examining task ID: task_201407241331_0006_m_000245 (and more) from job job_201407241331_0006
Examining task ID: task_201407241331_0006_m_000056 (and more) from job job_201407241331_0006
Task with the most failures(4):
Task ID:
task_201407241331_0006_m_000032
URL:
http://hadoop1.sdcorp.global.sandisk.com:50030/taskdetails.jsp?jobid=job_201407241331_0006&tipid=task_201407241331_0006_m_000032
Diagnostic Messages for this Task:
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 498 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
Am I missing some setup for running the external query? Can someone help?
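For reference, my understanding is that the external query streams rows through a Python script using Hive's TRANSFORM. The sketch below shows the shape I am running; the script, table, and column names are illustrative rather than the benchmark's exact text. What I want to confirm is the ADD FILE step, since a script that is only present on the client machine (or not executable by the task user on the worker nodes) would, as far as I know, make every map task fail with exactly this kind of "return code 2" error.

-- Illustrative sketch of the external (script) query pattern.
-- Script, table, and column names here are assumptions, not the
-- benchmark's exact text.

-- Ship the script to each task node via the distributed cache;
-- without this, map tasks cannot find it and fail.
ADD FILE /root/url_count.py;

-- Stream each row through the external script. Referencing the
-- script by its bare name (not an absolute path) resolves it from
-- the distributed cache on every node.
CREATE TABLE url_counts_partial AS
SELECT TRANSFORM (line)
       USING 'python url_count.py' AS (sourcePage, destPage, cnt)
FROM documents;

-- Aggregate the per-task partial counts.
CREATE TABLE url_counts_total AS
SELECT destPage, SUM(cnt) AS totalCount
FROM url_counts_partial
GROUP BY destPage;

If the script is instead referenced by an absolute path in the USING clause, I assume it must exist and be runnable at that same path on every worker node, which could also explain map-only failures like the one above.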
Thanks,
Madhura