Packages that use TaskAttemptContext

| Package | Description |
|---|---|
| org.apache.hadoop.mapred | A software framework for easily writing applications that process vast amounts of data (multi-terabyte data sets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. |
| org.apache.hadoop.mapreduce | |
| org.apache.hadoop.mapreduce.lib.db | |
| org.apache.hadoop.mapreduce.lib.input | |
| org.apache.hadoop.mapreduce.lib.output | |
Uses of TaskAttemptContext in org.apache.hadoop.mapred

Subclasses of TaskAttemptContext in org.apache.hadoop.mapred

| Modifier and Type | Class and Description |
|---|---|
| class | TaskAttemptContext |
Methods in org.apache.hadoop.mapred with parameters of type TaskAttemptContext

| Modifier and Type | Method and Description |
|---|---|
| void | OutputCommitter.abortTask(TaskAttemptContext taskContext) This method implements the new interface by calling the old method. |
| void | OutputCommitter.commitTask(TaskAttemptContext taskContext) This method implements the new interface by calling the old method. |
| boolean | OutputCommitter.needsTaskCommit(TaskAttemptContext taskContext) This method implements the new interface by calling the old method. |
| void | OutputCommitter.setupTask(TaskAttemptContext taskContext) This method implements the new interface by calling the old method. |
Uses of TaskAttemptContext in org.apache.hadoop.mapreduce

Subclasses of TaskAttemptContext in org.apache.hadoop.mapreduce

| Modifier and Type | Class and Description |
|---|---|
| class | MapContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> The context that is given to the Mapper. |
| class | Mapper.Context |
| class | ReduceContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> The context passed to the Reducer. |
| class | Reducer.Context |
| class | TaskInputOutputContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> A context object that allows input and output from the task. |
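Because Mapper.Context and Reducer.Context inherit from TaskAttemptContext (through TaskInputOutputContext), a task can read its attempt ID, configuration, and status straight from the context it receives. A minimal sketch, assuming text input and an illustrative configuration key my.example.flag:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ContextAwareMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

  private boolean flag;

  @Override
  protected void setup(Context context) throws IOException, InterruptedException {
    // Context is a TaskAttemptContext, so both the job configuration and the
    // task attempt ID are available here.
    Configuration conf = context.getConfiguration();
    flag = conf.getBoolean("my.example.flag", false);   // illustrative key
    context.setStatus("setup done for " + context.getTaskAttemptID());
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    if (flag) {
      context.write(value, new LongWritable(1));
    }
  }
}
```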
Methods in org.apache.hadoop.mapreduce with parameters of type TaskAttemptContext

| Modifier and Type | Method and Description |
|---|---|
| abstract void | OutputCommitter.abortTask(TaskAttemptContext taskContext) Discard the task output. |
| abstract void | RecordWriter.close(TaskAttemptContext context) Close this RecordWriter to future operations. |
| abstract void | OutputCommitter.commitTask(TaskAttemptContext taskContext) Promote the task's temporary output to the final output location: the task's output is moved to the job's output directory. |
| abstract RecordReader<K,V> | InputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) Create a record reader for a given split. |
| abstract OutputCommitter | OutputFormat.getOutputCommitter(TaskAttemptContext context) Get the output committer for this output format. |
| abstract RecordWriter<K,V> | OutputFormat.getRecordWriter(TaskAttemptContext context) Get the RecordWriter for the given task. |
| abstract void | RecordReader.initialize(InputSplit split, TaskAttemptContext context) Called once at initialization. |
| abstract boolean | OutputCommitter.needsTaskCommit(TaskAttemptContext taskContext) Check whether task needs a commit. |
| abstract void | OutputCommitter.setupTask(TaskAttemptContext taskContext) Sets up output for the task. |
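Together these abstract methods form the per-task output protocol that the framework drives with a single TaskAttemptContext. The following is a hedged sketch of that call sequence, not framework code; the helper name and the Text key/value types are assumptions made for concreteness.

```java
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.OutputCommitter;
import org.apache.hadoop.mapreduce.OutputFormat;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

public final class TaskOutputSketch {

  // Illustrative only: shows the order in which the framework exercises the
  // OutputFormat / OutputCommitter / RecordWriter methods for one task attempt.
  static void writeTaskOutput(OutputFormat<Text, Text> outputFormat,
                              TaskAttemptContext context)
      throws IOException, InterruptedException {
    OutputCommitter committer = outputFormat.getOutputCommitter(context);
    committer.setupTask(context);                       // prepare temporary output

    RecordWriter<Text, Text> writer = outputFormat.getRecordWriter(context);
    try {
      writer.write(new Text("key"), new Text("value")); // task emits its records
    } finally {
      writer.close(context);                            // flush and release resources
    }

    if (committer.needsTaskCommit(context)) {
      committer.commitTask(context);                    // promote temporary output
    }
    // On failure the framework would call committer.abortTask(context) instead.
  }

  private TaskOutputSketch() { }
}
```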
Uses of TaskAttemptContext in org.apache.hadoop.mapreduce.lib.db

Methods in org.apache.hadoop.mapreduce.lib.db with parameters of type TaskAttemptContext

| Modifier and Type | Method and Description |
|---|---|
| void | DBOutputFormat.DBRecordWriter.close(TaskAttemptContext context) Close this RecordWriter to future operations. |
| RecordReader<LongWritable,T> | DBInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) Create a record reader for a given split. |
| OutputCommitter | DBOutputFormat.getOutputCommitter(TaskAttemptContext context) |
| RecordWriter<K,V> | DBOutputFormat.getRecordWriter(TaskAttemptContext context) Get the RecordWriter for the given task. |
| void | DBRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
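A minimal sketch of wiring DBInputFormat into a job; the JDBC driver, URL, credentials, table, and column names are placeholders. At run time the framework calls DBInputFormat.createRecordReader(split, context) once per task attempt, and the TaskAttemptContext passed to it carries the job configuration, including the JDBC settings stored by DBConfiguration.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class DBInputJobSketch {

  /** Hypothetical row type for a users(id BIGINT, name VARCHAR) table. */
  public static class UserRecord implements Writable, DBWritable {
    long id;
    String name;

    public void readFields(ResultSet rs) throws SQLException {
      id = rs.getLong(1);
      name = rs.getString(2);
    }
    public void write(PreparedStatement ps) throws SQLException {
      ps.setLong(1, id);
      ps.setString(2, name);
    }
    public void readFields(DataInput in) throws IOException {
      id = in.readLong();
      name = in.readUTF();
    }
    public void write(DataOutput out) throws IOException {
      out.writeLong(id);
      out.writeUTF(name);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder driver, URL, and credentials.
    DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
        "jdbc:mysql://localhost/mydb", "user", "password");

    Job job = new Job(conf, "db-input-sketch");
    job.setJarByClass(DBInputJobSketch.class);
    job.setInputFormatClass(DBInputFormat.class);

    // Each map task attempt gets its reader from
    // DBInputFormat.createRecordReader(split, context); the TaskAttemptContext
    // carries the JDBC configuration stored above.
    DBInputFormat.setInput(job, UserRecord.class, "users",
        null /* conditions */, "id" /* orderBy */, "id", "name");

    // ... set mapper, output format/path, and submit the job ...
  }
}
```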
Uses of TaskAttemptContext in org.apache.hadoop.mapreduce.lib.input

Fields in org.apache.hadoop.mapreduce.lib.input declared as TaskAttemptContext

| Modifier and Type | Field and Description |
|---|---|
| protected TaskAttemptContext | CombineFileRecordReader.context |
Methods in org.apache.hadoop.mapreduce.lib.input with parameters of type TaskAttemptContext

| Modifier and Type | Method and Description |
|---|---|
| RecordReader<K,V> | DelegatingInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
| RecordReader<BytesWritable,BytesWritable> | SequenceFileAsBinaryInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
| abstract RecordReader<K,V> | CombineFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) This is not implemented yet. |
| RecordReader<K,V> | SequenceFileInputFilter.createRecordReader(InputSplit split, TaskAttemptContext context) Create a record reader for the given split. |
| RecordReader<LongWritable,Text> | NLineInputFormat.createRecordReader(InputSplit genericSplit, TaskAttemptContext context) |
| RecordReader<K,V> | SequenceFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
| RecordReader<LongWritable,Text> | TextInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
| RecordReader<Text,Text> | SequenceFileAsTextInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context) |
| RecordReader<Text,Text> | KeyValueTextInputFormat.createRecordReader(InputSplit genericSplit, TaskAttemptContext context) |
| void | SequenceFileRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
| void | SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
| void | SequenceFileAsTextRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
| void | DelegatingRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
| void | LineRecordReader.initialize(InputSplit genericSplit, TaskAttemptContext context) |
| void | CombineFileRecordReader.initialize(InputSplit split, TaskAttemptContext context) |
| void | KeyValueLineRecordReader.initialize(InputSplit genericSplit, TaskAttemptContext context) |
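All of these readers follow the same lifecycle: the input format's createRecordReader(split, context) constructs the reader, and the framework then calls initialize(split, context) on it before the first record is read. A minimal sketch of a custom format that reuses LineRecordReader; the class name and the no-split policy are illustrative choices.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

// Sketch of a line-oriented input format that never splits its files.
public class WholeFileLinesInputFormat extends FileInputFormat<LongWritable, Text> {

  @Override
  protected boolean isSplitable(JobContext context, Path file) {
    return false;   // one mapper per file, purely for illustration
  }

  @Override
  public RecordReader<LongWritable, Text> createRecordReader(InputSplit split,
                                                             TaskAttemptContext context) {
    // The framework, not this method, calls reader.initialize(split, context);
    // the TaskAttemptContext hands the reader the job configuration and attempt ID.
    return new LineRecordReader();
  }
}
```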
Constructors in org.apache.hadoop.mapreduce.lib.input with parameters of type TaskAttemptContext

| Constructor and Description |
|---|
| CombineFileRecordReader(CombineFileSplit split, TaskAttemptContext context, Class<? extends RecordReader<K,V>> rrClass) A generic RecordReader that can hand out different recordReaders for each chunk in the CombineFileSplit. |
| DelegatingRecordReader(InputSplit split, TaskAttemptContext context) Constructs the DelegatingRecordReader. |
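CombineFileRecordReader is a wrapper: for each chunk of the CombineFileSplit it constructs an instance of rrClass, handing it the split, the TaskAttemptContext (also kept in the protected context field listed above), and the chunk index. A hedged sketch of the usual pattern, reading each chunk with a LineRecordReader; the class names are illustrative, and the (CombineFileSplit, TaskAttemptContext, Integer) constructor shape is assumed to be the contract the wrapper expects.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader;
import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

public class CombinedTextInputFormat extends CombineFileInputFormat<LongWritable, Text> {

  @Override
  public RecordReader<LongWritable, Text> createRecordReader(InputSplit split,
      TaskAttemptContext context) throws IOException {
    // CombineFileRecordReader builds one ChunkLineReader per file chunk,
    // passing it (split, context, chunkIndex).
    return new CombineFileRecordReader<LongWritable, Text>(
        (CombineFileSplit) split, context, ChunkLineReader.class);
  }

  /** Reads a single chunk of the combined split with a LineRecordReader. */
  public static class ChunkLineReader extends RecordReader<LongWritable, Text> {

    private final int index;
    private final LineRecordReader reader = new LineRecordReader();

    // Required constructor shape: (CombineFileSplit, TaskAttemptContext, Integer).
    public ChunkLineReader(CombineFileSplit split, TaskAttemptContext context,
        Integer index) {
      this.index = index;
    }

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context)
        throws IOException, InterruptedException {
      // Read only this reader's chunk of the combined split.
      CombineFileSplit combined = (CombineFileSplit) split;
      FileSplit chunk = new FileSplit(combined.getPath(index), combined.getOffset(index),
          combined.getLength(index), combined.getLocations());
      reader.initialize(chunk, context);
    }

    @Override
    public boolean nextKeyValue() throws IOException {
      return reader.nextKeyValue();
    }

    @Override
    public LongWritable getCurrentKey() {
      return reader.getCurrentKey();
    }

    @Override
    public Text getCurrentValue() {
      return reader.getCurrentValue();
    }

    @Override
    public float getProgress() throws IOException {
      return reader.getProgress();
    }

    @Override
    public void close() throws IOException {
      reader.close();
    }
  }
}
```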
Uses of TaskAttemptContext in org.apache.hadoop.mapreduce.lib.output

Methods in org.apache.hadoop.mapreduce.lib.output with parameters of type TaskAttemptContext

| Modifier and Type | Method and Description |
|---|---|
| void | FileOutputCommitter.abortTask(TaskAttemptContext context) Delete the work directory. |
| void | FilterOutputFormat.FilterRecordWriter.close(TaskAttemptContext context) |
| void | TextOutputFormat.LineRecordWriter.close(TaskAttemptContext context) |
| void | FileOutputCommitter.commitTask(TaskAttemptContext context) Move the files from the work directory to the job output directory. |
| Path | FileOutputFormat.getDefaultWorkFile(TaskAttemptContext context, String extension) Get the default path and filename for the output format. |
| OutputCommitter | FileOutputFormat.getOutputCommitter(TaskAttemptContext context) |
| OutputCommitter | FilterOutputFormat.getOutputCommitter(TaskAttemptContext context) |
| OutputCommitter | LazyOutputFormat.getOutputCommitter(TaskAttemptContext context) |
| OutputCommitter | NullOutputFormat.getOutputCommitter(TaskAttemptContext context) |
| abstract RecordWriter<K,V> | FileOutputFormat.getRecordWriter(TaskAttemptContext job) |
| RecordWriter<BytesWritable,BytesWritable> | SequenceFileAsBinaryOutputFormat.getRecordWriter(TaskAttemptContext context) |
| RecordWriter<K,V> | FilterOutputFormat.getRecordWriter(TaskAttemptContext context) |
| RecordWriter<K,V> | LazyOutputFormat.getRecordWriter(TaskAttemptContext context) |
| RecordWriter<K,V> | SequenceFileOutputFormat.getRecordWriter(TaskAttemptContext context) |
| RecordWriter<K,V> | TextOutputFormat.getRecordWriter(TaskAttemptContext job) |
| RecordWriter<K,V> | NullOutputFormat.getRecordWriter(TaskAttemptContext context) |
| protected SequenceFile.Writer | SequenceFileAsBinaryOutputFormat.getSequenceWriter(TaskAttemptContext context, Class<?> keyClass, Class<?> valueClass) |
| static String | FileOutputFormat.getUniqueFile(TaskAttemptContext context, String name, String extension) Generate a unique filename, based on the task id, name, and extension. |
| boolean | FileOutputCommitter.needsTaskCommit(TaskAttemptContext context) Did this task write any files in the work directory? |
| void | FileOutputCommitter.setupTask(TaskAttemptContext context) No task setup required. |
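A custom FileOutputFormat usually overrides only getRecordWriter(TaskAttemptContext) and calls getDefaultWorkFile(context, extension) so that its output lands in the committer's work directory under a task-attempt-unique name (the naming that getUniqueFile generates). A minimal sketch; the format name, the .txt extension, and the tab separator are illustrative.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TabSeparatedTextOutputFormat extends FileOutputFormat<Text, Text> {

  @Override
  public RecordWriter<Text, Text> getRecordWriter(TaskAttemptContext context)
      throws IOException, InterruptedException {
    Configuration conf = context.getConfiguration();
    // Resolves to a per-attempt file in the committer's work directory.
    Path file = getDefaultWorkFile(context, ".txt");
    FileSystem fs = file.getFileSystem(conf);
    final FSDataOutputStream out = fs.create(file, false);

    return new RecordWriter<Text, Text>() {
      @Override
      public void write(Text key, Text value) throws IOException {
        out.writeBytes(key + "\t" + value + "\n");
      }

      @Override
      public void close(TaskAttemptContext ctx) throws IOException {
        out.close();   // the committer later promotes the file on commitTask
      }
    };
  }
}
```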
Constructors in org.apache.hadoop.mapreduce.lib.output with parameters of type TaskAttemptContext

| Constructor and Description |
|---|
| FileOutputCommitter(Path outputPath, TaskAttemptContext context) Create a file output committer |
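The (Path, TaskAttemptContext) constructor binds a committer to a particular output directory and task attempt, which is essentially what FileOutputFormat.getOutputCommitter(context) does internally. A minimal sketch of building one explicitly in an OutputFormat subclass; the subclass name is illustrative.

```java
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.OutputCommitter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class ExplicitCommitterTextOutputFormat extends TextOutputFormat<Text, Text> {

  @Override
  public OutputCommitter getOutputCommitter(TaskAttemptContext context) throws IOException {
    // getOutputPath reads the job's output directory from the configuration
    // carried by the context (a TaskAttemptContext is also a JobContext).
    Path outputPath = getOutputPath(context);
    return new FileOutputCommitter(outputPath, context);
  }
}
```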