Packages that use FileSystem | |
---|---|
org.apache.hadoop.fs | An abstract file system API. |
org.apache.hadoop.fs.ftp | |
org.apache.hadoop.fs.s3 | A distributed, block-based implementation of FileSystem that uses Amazon S3 as a backing store. |
org.apache.hadoop.fs.s3native | A distributed implementation of FileSystem for reading and writing files on Amazon S3. |
org.apache.hadoop.fs.viewfs | |
org.apache.hadoop.io | Generic i/o code for use when reading and writing data to the network, to databases, and to files. |
org.apache.hadoop.mapred | |
org.apache.hadoop.mapred.lib | |
org.apache.hadoop.mapred.lib.db | |
org.apache.hadoop.mapreduce | |
org.apache.hadoop.mapreduce.lib.input | |

Uses of FileSystem in org.apache.hadoop.fs
---

Subclasses of FileSystem in org.apache.hadoop.fs | |
---|---|
class | ChecksumFileSystem: Abstract checksummed FileSystem. |
class | FilterFileSystem: A FilterFileSystem contains some other file system, which it uses as its basic file system, possibly transforming the data along the way or providing additional functionality. |
class | LocalFileSystem: Implements the FileSystem API for the checksummed local filesystem. |
class | RawLocalFileSystem: Implements the FileSystem API for the raw local filesystem. |

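These subclasses are normally obtained through FileSystem's factory methods rather than constructed directly. A minimal sketch, assuming a default Configuration on the classpath; FileSystem.getLocal is another FileSystem factory that is not part of this table:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;

public class LocalFsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Checksummed local filesystem (a ChecksumFileSystem wrapping RawLocalFileSystem).
    LocalFileSystem localFs = FileSystem.getLocal(conf);

    // The underlying raw filesystem, without CRC side files.
    FileSystem rawFs = localFs.getRaw();

    Path p = new Path("/tmp/example.txt");
    System.out.println("exists (checksummed view): " + localFs.exists(p));
    System.out.println("exists (raw view): " + rawFs.exists(p));
  }
}
```
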
Fields in org.apache.hadoop.fs declared as FileSystem | |
---|---|
protected FileSystem | FilterFileSystem.fs |
protected FileSystem | TrashPolicy.fs |

Methods in org.apache.hadoop.fs that return FileSystem | |
---|---|
static FileSystem | FileSystem.get(Configuration conf): Returns the configured filesystem implementation. |
static FileSystem | FileSystem.get(URI uri, Configuration conf): Returns the FileSystem for this URI's scheme and authority. |
static FileSystem | FileSystem.get(URI uri, Configuration conf, String user): Get a filesystem instance based on the URI, the passed configuration and the user. |
FileSystem[] | FilterFileSystem.getChildFileSystems() |
FileSystem[] | FileSystem.getChildFileSystems(): Get all the immediate child FileSystems embedded in this FileSystem. |
FileSystem | Path.getFileSystem(Configuration conf): Return the FileSystem that owns this Path. |
protected static FileSystem | FileSystem.getFSofPath(Path absOrFqPath, Configuration conf) |
static FileSystem | FileSystem.getNamed(String name, Configuration conf): Deprecated. Call get(URI, Configuration) instead. |
FileSystem | LocalFileSystem.getRaw() |
FileSystem | FilterFileSystem.getRawFileSystem(): Get the raw file system. |
FileSystem | ChecksumFileSystem.getRawFileSystem(): Get the raw file system. |
static FileSystem | FileSystem.newInstance(Configuration conf): Returns a unique configured filesystem implementation. |
static FileSystem | FileSystem.newInstance(URI uri, Configuration conf): Returns the FileSystem for this URI's scheme and authority. |
static FileSystem | FileSystem.newInstance(URI uri, Configuration conf, String user): Returns the FileSystem for this URI's scheme and authority and the passed user. |

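A minimal usage sketch of the factories above, assuming a default Configuration; the hdfs://namenode:8020 URI and the paths are placeholders:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class GetFileSystemExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Default filesystem from the configuration (fs.defaultFS / fs.default.name).
    FileSystem defaultFs = FileSystem.get(conf);
    System.out.println("default: " + defaultFs.getUri());

    // Filesystem chosen by the URI scheme and authority; the authority is a placeholder.
    FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
    System.out.println("explicit: " + hdfs.getUri());

    // Equivalent per-path lookup: the Path resolves its own FileSystem.
    Path p = new Path("hdfs://namenode:8020/user/example/hello.txt");
    FileSystem owner = p.getFileSystem(conf);
    try (FSDataOutputStream out = owner.create(p)) {
      out.writeUTF("hello");
    }

    // get() returns cached instances; newInstance() returns a non-cached one
    // that the caller must close itself.
    FileSystem privateCopy = FileSystem.newInstance(conf);
    privateCopy.close();
  }
}
```
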
Methods in org.apache.hadoop.fs that return types with arguments of type FileSystem | |
---|---|
static Class<? extends FileSystem> | FileSystem.getFileSystemClass(String scheme, Configuration conf) |

Methods in org.apache.hadoop.fs with parameters of type FileSystem | |
---|---|
static boolean | FileUtil.copy(File src, FileSystem dstFS, Path dst, boolean deleteSource, Configuration conf): Copy local files to a FileSystem. |
static boolean | FileUtil.copy(FileSystem srcFS, FileStatus srcStatus, FileSystem dstFS, Path dst, boolean deleteSource, boolean overwrite, Configuration conf): Copy files between FileSystems. |
static boolean | FileUtil.copy(FileSystem srcFS, Path[] srcs, FileSystem dstFS, Path dst, boolean deleteSource, boolean overwrite, Configuration conf) |
static boolean | FileUtil.copy(FileSystem srcFS, Path src, File dst, boolean deleteSource, Configuration conf): Copy FileSystem files to local files. |
static boolean | FileUtil.copy(FileSystem srcFS, Path src, FileSystem dstFS, Path dst, boolean deleteSource, boolean overwrite, Configuration conf): Copy files between FileSystems. |
static boolean | FileUtil.copy(FileSystem srcFS, Path src, FileSystem dstFS, Path dst, boolean deleteSource, Configuration conf): Copy files between FileSystems. |
static boolean | FileUtil.copyMerge(FileSystem srcFS, Path srcDir, FileSystem dstFS, Path dstFile, boolean deleteSource, Configuration conf, String addString): Copy all files in a directory to one output file (merge). |
static FSDataOutputStream | FileSystem.create(FileSystem fs, Path file, FsPermission permission): Create a file with the provided permission. The permission of the file is set to the provided permission as in setPermission, not permission&~umask. It is implemented using two RPCs. |
static void | FileUtil.fullyDelete(FileSystem fs, Path dir): Deprecated. Use FileSystem.delete(Path, boolean) instead. |
static TrashPolicy | TrashPolicy.getInstance(Configuration conf, FileSystem fs, Path home): Get an instance of the configured TrashPolicy based on the value of the configuration parameter fs.trash.classname. |
abstract void | TrashPolicy.initialize(Configuration conf, FileSystem fs, Path home): Used to set up the trash policy. |
Path | Path.makeQualified(FileSystem fs): Deprecated. |
static boolean | FileSystem.mkdirs(FileSystem fs, Path dir, FsPermission permission): Create a directory with the provided permission. The permission of the directory is set to the provided permission as in setPermission, not permission&~umask. |
static boolean | Trash.moveToAppropriateTrash(FileSystem fs, Path p, Configuration conf): In the case of symlinks or mount points, the path must be moved to the appropriate trash bin in the actual volume of the path p being deleted. |

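As an illustration of the FileUtil.copy overloads above, a minimal sketch copying a directory between two FileSystems; the source URI and the paths are placeholders:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class CopyBetweenFileSystems {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    FileSystem srcFs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
    FileSystem dstFs = FileSystem.getLocal(conf);

    Path src = new Path("/user/example/input");
    Path dst = new Path("/tmp/input-copy");

    // deleteSource = false, overwrite = true
    boolean ok = FileUtil.copy(srcFs, src, dstFs, dst, false, true, conf);
    System.out.println("copy succeeded: " + ok);
  }
}
```
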
Method parameters in org.apache.hadoop.fs with type arguments of type FileSystem | |
---|---|
static org.apache.hadoop.fs.FileSystem.Statistics | FileSystem.getStatistics(String scheme, Class<? extends FileSystem> cls): Get the statistics for a particular file system. |

Constructors in org.apache.hadoop.fs with parameters of type FileSystem | |
---|---|
ChecksumFileSystem(FileSystem fs) | |
FilterFileSystem(FileSystem fs) | |
LocalFileSystem(FileSystem rawLocalFileSystem) | |
Trash(FileSystem fs, Configuration conf) | Construct a trash can accessor for the FileSystem provided. |

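A brief sketch of the Trash accessor constructed above, assuming the moveToTrash(Path) instance method (not listed in this table) and a placeholder path; trash is only effective when fs.trash.interval is positive:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class TrashExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Keep deleted files for 60 minutes before the trash checkpoint is purged.
    conf.setLong("fs.trash.interval", 60);

    FileSystem fs = FileSystem.get(conf);
    Trash trash = new Trash(fs, conf);

    // Move a path into the trash directory instead of deleting it outright.
    Path doomed = new Path("/user/example/old-data");
    boolean moved = trash.moveToTrash(doomed);
    System.out.println("moved to trash: " + moved);
  }
}
```
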
Uses of FileSystem in org.apache.hadoop.fs.ftp
---

Subclasses of FileSystem in org.apache.hadoop.fs.ftp | |
---|---|
class | FTPFileSystem: A FileSystem backed by an FTP client provided by Apache Commons Net. |

Uses of FileSystem in org.apache.hadoop.fs.s3
---

Subclasses of FileSystem in org.apache.hadoop.fs.s3 | |
---|---|
class | S3FileSystem: A block-based FileSystem backed by Amazon S3. |

Uses of FileSystem in org.apache.hadoop.fs.s3native
---

Subclasses of FileSystem in org.apache.hadoop.fs.s3native | |
---|---|
class | NativeS3FileSystem: A FileSystem for reading and writing files stored on Amazon S3. |

Uses of FileSystem in org.apache.hadoop.fs.viewfs
---

Subclasses of FileSystem in org.apache.hadoop.fs.viewfs | |
---|---|
class | ViewFileSystem: ViewFileSystem (extends the FileSystem interface) implements a client-side mount table. |

Methods in org.apache.hadoop.fs.viewfs that return FileSystem | |
---|---|
FileSystem[] | ViewFileSystem.getChildFileSystems() |

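ViewFileSystem is driven by a client-side mount table defined in configuration. A minimal sketch, assuming the usual fs.viewfs.mounttable.&lt;table&gt;.link.&lt;mountPoint&gt; configuration keys and placeholder HDFS authorities:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ViewFsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Two mount points in the "default" mount table; the target URIs are placeholders.
    conf.set("fs.viewfs.mounttable.default.link./user", "hdfs://nn1.example.com:8020/user");
    conf.set("fs.viewfs.mounttable.default.link./data", "hdfs://nn2.example.com:8020/data");

    // viewfs:/// (no authority) resolves paths against the "default" mount table.
    FileSystem viewFs = FileSystem.get(URI.create("viewfs:///"), conf);

    // Each mount target appears as an immediate child FileSystem.
    for (FileSystem child : viewFs.getChildFileSystems()) {
      System.out.println("child filesystem: " + child.getUri());
    }
  }
}
```
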
Uses of FileSystem in org.apache.hadoop.io
---

Methods in org.apache.hadoop.io with parameters of type FileSystem | |
---|---|
static org.apache.hadoop.io.SequenceFile.Writer | SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass): Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead. |
static org.apache.hadoop.io.SequenceFile.Writer | SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, int bufferSize, short replication, long blockSize, boolean createParent, org.apache.hadoop.io.SequenceFile.CompressionType compressionType, CompressionCodec codec, org.apache.hadoop.io.SequenceFile.Metadata metadata): Deprecated. |
static org.apache.hadoop.io.SequenceFile.Writer | SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, int bufferSize, short replication, long blockSize, org.apache.hadoop.io.SequenceFile.CompressionType compressionType, CompressionCodec codec, Progressable progress, org.apache.hadoop.io.SequenceFile.Metadata metadata): Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead. |
static org.apache.hadoop.io.SequenceFile.Writer | SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, org.apache.hadoop.io.SequenceFile.CompressionType compressionType): Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead. |
static org.apache.hadoop.io.SequenceFile.Writer | SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, org.apache.hadoop.io.SequenceFile.CompressionType compressionType, CompressionCodec codec): Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead. |
static org.apache.hadoop.io.SequenceFile.Writer | SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, org.apache.hadoop.io.SequenceFile.CompressionType compressionType, CompressionCodec codec, Progressable progress): Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead. |
static org.apache.hadoop.io.SequenceFile.Writer | SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, org.apache.hadoop.io.SequenceFile.CompressionType compressionType, CompressionCodec codec, Progressable progress, org.apache.hadoop.io.SequenceFile.Metadata metadata): Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead. |
static org.apache.hadoop.io.SequenceFile.Writer | SequenceFile.createWriter(FileSystem fs, Configuration conf, Path name, Class keyClass, Class valClass, org.apache.hadoop.io.SequenceFile.CompressionType compressionType, Progressable progress): Deprecated. Use SequenceFile.createWriter(Configuration, Writer.Option...) instead. |
static void | MapFile.delete(FileSystem fs, String name): Deletes the named map file. |
static void | BloomMapFile.delete(FileSystem fs, String name) |
static long | MapFile.fix(FileSystem fs, Path dir, Class<? extends Writable> keyClass, Class<? extends Writable> valueClass, boolean dryrun, Configuration conf): This method attempts to fix a corrupt MapFile by re-creating its index. |
static void | MapFile.rename(FileSystem fs, String oldName, String newName): Renames an existing map directory. |

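All of the FileSystem-based SequenceFile.createWriter overloads above are deprecated in favor of the option-based factory they point to. A minimal sketch of that replacement, assuming a writable path on the default filesystem:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SequenceFileWriteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path file = new Path("/tmp/example.seq");

    SequenceFile.Writer writer = null;
    try {
      // Option-based replacement for the deprecated createWriter(FileSystem, ...) overloads.
      writer = SequenceFile.createWriter(conf,
          SequenceFile.Writer.file(file),
          SequenceFile.Writer.keyClass(IntWritable.class),
          SequenceFile.Writer.valueClass(Text.class));

      for (int i = 0; i < 10; i++) {
        writer.append(new IntWritable(i), new Text("record-" + i));
      }
    } finally {
      if (writer != null) {
        writer.close();
      }
    }
  }
}
```
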
Uses of FileSystem in org.apache.hadoop.mapred
---

Methods in org.apache.hadoop.mapred that return FileSystem | |
---|---|
FileSystem | JobClient.getFs(): Get a filesystem handle. |

Methods in org.apache.hadoop.mapred with parameters of type FileSystem | |
---|---|
protected void | FileInputFormat.addInputPathRecursively(List<FileStatus> result, FileSystem fs, Path path, PathFilter inputFilter): Add files in the input path recursively into the results. |
void | OutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job): Check for validity of the output-specification for the job. |
void | FileOutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) |
void | SequenceFileAsBinaryOutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) |
static org.apache.hadoop.io.MapFile.Reader[] | MapFileOutputFormat.getReaders(FileSystem ignored, Path dir, Configuration conf): Open the output generated by this format. |
RecordWriter<K,V> | OutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress): Get the RecordWriter for the given job. |
RecordWriter<WritableComparable,Writable> | MapFileOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
RecordWriter<K,V> | SequenceFileOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
abstract RecordWriter<K,V> | FileOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
RecordWriter<K,V> | TextOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
RecordWriter<BytesWritable,BytesWritable> | SequenceFileAsBinaryOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
static boolean | JobClient.isJobDirValid(Path jobDirPath, FileSystem fs): Checks if the job directory is clean and has all the required components for (re)starting the job. |
protected boolean | TextInputFormat.isSplitable(FileSystem fs, Path file) |
protected boolean | FileInputFormat.isSplitable(FileSystem fs, Path filename): Is the given filename splitable? Usually true, but if the file is stream compressed, it will not be. |
protected boolean | KeyValueTextInputFormat.isSplitable(FileSystem fs, Path file) |
protected boolean | FixedLengthInputFormat.isSplitable(FileSystem fs, Path file) |

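In this API the FileSystem argument of checkOutputSpecs and getRecordWriter is conventionally ignored (hence the parameter name); implementations resolve the real filesystem from the JobConf. A minimal sketch of a hypothetical OutputFormat that follows this pattern:

```java
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordWriter;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.util.Progressable;

// Hypothetical output format that writes "key<TAB>value" lines.
public class TabSeparatedOutputFormat extends FileOutputFormat<Text, Text> {

  @Override
  public RecordWriter<Text, Text> getRecordWriter(FileSystem ignored, JobConf job,
      String name, Progressable progress) throws IOException {
    // Resolve the task's output path and its owning FileSystem from the JobConf,
    // not from the ignored parameter.
    Path file = FileOutputFormat.getTaskOutputPath(job, name);
    FileSystem fs = file.getFileSystem(job);
    final FSDataOutputStream out = fs.create(file, progress);

    return new RecordWriter<Text, Text>() {
      public void write(Text key, Text value) throws IOException {
        out.writeBytes(key + "\t" + value + "\n");
      }
      public void close(Reporter reporter) throws IOException {
        out.close();
      }
    };
  }
}
```
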
Uses of FileSystem in org.apache.hadoop.mapred.lib
---

Fields in org.apache.hadoop.mapred.lib declared as FileSystem | |
---|---|
protected FileSystem | CombineFileRecordReader.fs |

Methods in org.apache.hadoop.mapred.lib with parameters of type FileSystem | |
---|---|
void | FilterOutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) |
void | LazyOutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) |
void | NullOutputFormat.checkOutputSpecs(FileSystem ignored, JobConf job) |
protected abstract RecordWriter<K,V> | MultipleOutputFormat.getBaseRecordWriter(FileSystem fs, JobConf job, String name, Progressable arg3) |
protected RecordWriter<K,V> | MultipleSequenceFileOutputFormat.getBaseRecordWriter(FileSystem fs, JobConf job, String name, Progressable arg3) |
protected RecordWriter<K,V> | MultipleTextOutputFormat.getBaseRecordWriter(FileSystem fs, JobConf job, String name, Progressable arg3) |
RecordWriter<K,V> | FilterOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
RecordWriter<K,V> | LazyOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
RecordWriter<K,V> | MultipleOutputFormat.getRecordWriter(FileSystem fs, JobConf job, String name, Progressable arg3): Create a composite record writer that can write key/value data to different output files. |
RecordWriter<K,V> | NullOutputFormat.getRecordWriter(FileSystem ignored, JobConf job, String name, Progressable progress) |
protected boolean | CombineFileInputFormat.isSplitable(FileSystem fs, Path file) |

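MultipleOutputFormat and its subclasses route records to different output files; getBaseRecordWriter receives the FileSystem shown above, while subclasses usually only override the filename-generation hook. A brief sketch with a hypothetical key-based layout:

```java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

// Hypothetical: write each record under a directory named after its key.
public class KeyPartitionedTextOutputFormat extends MultipleTextOutputFormat<Text, Text> {

  @Override
  protected String generateFileNameForKeyValue(Text key, Text value, String name) {
    // "name" is the default leaf file name (e.g. part-00000); prefix it with the key.
    return key.toString() + "/" + name;
  }
}
```
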
Uses of FileSystem in org.apache.hadoop.mapred.lib.db
---

Methods in org.apache.hadoop.mapred.lib.db with parameters of type FileSystem | |
---|---|
void | DBOutputFormat.checkOutputSpecs(FileSystem filesystem, JobConf job): Check for validity of the output-specification for the job. |
RecordWriter<K,V> | DBOutputFormat.getRecordWriter(FileSystem filesystem, JobConf job, String name, Progressable progress): Get the RecordWriter for the given job. |

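DBOutputFormat does not use the FileSystem argument; it writes to a JDBC table configured on the JobConf. A brief configuration sketch with placeholder driver, URL, credentials, and table name, assuming the DBConfiguration and setOutput helpers of this package:

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.db.DBConfiguration;
import org.apache.hadoop.mapred.lib.db.DBOutputFormat;

public class DbOutputSetup {
  public static void configure(JobConf job) {
    // Placeholder JDBC driver, URL, and credentials.
    DBConfiguration.configureDB(job,
        "com.mysql.jdbc.Driver",
        "jdbc:mysql://db.example.com/analytics",
        "user", "password");

    // Make DBOutputFormat the job's output format.
    job.setOutputFormat(DBOutputFormat.class);

    // Write into a hypothetical "word_counts" table with two columns.
    DBOutputFormat.setOutput(job, "word_counts", "word", "count");
  }
}
```
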
Uses of FileSystem in org.apache.hadoop.mapreduce
---

Methods in org.apache.hadoop.mapreduce that return FileSystem | |
---|---|
FileSystem | Cluster.getFileSystem(): Get the file system where job-specific files are stored. |

Methods in org.apache.hadoop.mapreduce with parameters of type FileSystem | |
---|---|
org.apache.hadoop.mapreduce.JobSubmitter | Job.getJobSubmitter(FileSystem fs, org.apache.hadoop.mapreduce.protocol.ClientProtocol submitClient): Only for mocking via unit tests. |

Uses of FileSystem in org.apache.hadoop.mapreduce.lib.input
---

Fields in org.apache.hadoop.mapreduce.lib.input declared as FileSystem | |
---|---|
protected FileSystem | CombineFileRecordReader.fs |

Methods in org.apache.hadoop.mapreduce.lib.input with parameters of type FileSystem | |
---|---|
protected void | FileInputFormat.addInputPathRecursively(List<FileStatus> result, FileSystem fs, Path path, PathFilter inputFilter): Add files in the input path recursively into the results. |
protected BlockLocation[] | CombineFileInputFormat.getFileBlockLocations(FileSystem fs, FileStatus stat) |
