Hadoop InputFormat
- InputFormat defines how the input files are split up and read in Hadoop.
- A Hadoop InputFormat is the first component in MapReduce; it is responsible for creating the input splits and dividing them into records.
- Initially, the data for a MapReduce task is stored in input files, and input files typically reside in HDFS.
- Although the format of these files is arbitrary, line-based log files and binary formats can both be used.
- Using InputFormat we define how these input files are split and read.
- The InputFormat class is one of the fundamental classes in the Hadoop MapReduce framework.
Types of InputFormat in MapReduce
1. FileInputFormat in Hadoop
It is the base class for all file-based InputFormats. Hadoop FileInputFormat specifies the input directory where the data files are located. When we start a Hadoop job, FileInputFormat is provided with a path containing the files to read. FileInputFormat reads all these files and divides them into one or more InputSplits.
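For illustration, here is a minimal sketch of handing FileInputFormat the input directory of a job; the class name and the HDFS path /user/data/logs are hypothetical and only for illustration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class InputPathExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "input-path-example");
        // FileInputFormat reads every file under this directory and
        // divides the files into one or more InputSplits.
        FileInputFormat.addInputPath(job, new Path("/user/data/logs"));
    }
}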
2. TextInputFormat
It is the default InputFormat of MapReduce. TextInputFormat treats each line of each input file as a separate record and performs no parsing. This is useful for unformatted data or line-based records like log files.
- Key – It is the byte offset of the beginning of the line within the file (not within the whole dataset), so it is unique when combined with the file name.
- Value – It is the contents of the line, excluding line terminators.
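A minimal mapper sketch (class and identifier names are hypothetical) showing the record shape TextInputFormat delivers, i.e. a LongWritable byte offset as the key and a Text line as the value:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LineEchoMapper extends Mapper<LongWritable, Text, LongWritable, Text> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Re-emit the byte offset and the line unchanged, just to show the types.
        context.write(offset, line);
    }
}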
3. KeyValueTextInputFormat
It is similar to TextInputFormat in that it also treats each line of input as a separate record. However, while TextInputFormat treats the entire line as the value, KeyValueTextInputFormat breaks the line itself into a key and a value at a tab character ('\t'). Here the key is everything up to the tab character, while the value is the remaining part of the line after the tab character.
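A minimal configuration sketch (the job name is hypothetical); in the new API the separator can be changed from the default tab via the mapreduce.input.keyvaluelinerecordreader.key.value.separator property:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;

public class KeyValueTextExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Default separator is a tab; override it here if the data uses another character.
        conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", "\t");
        Job job = Job.getInstance(conf, "kv-text-example");
        job.setInputFormatClass(KeyValueTextInputFormat.class);
    }
}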
4. SequenceFileInputFormat
Hadoop SequenceFileInputFormat is an InputFormat which reads sequence files. Sequence files are binary files that store sequences of binary key-value pairs. They support block compression and provide direct serialization and deserialization of arbitrary data types (not just text). Here both the key and the value are user-defined.
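Selecting it is a one-line job setting; a minimal sketch (the job name is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;

public class SequenceFileInputExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "seqfile-input-example");
        // The mapper's input key/value types must match the types stored in the sequence file.
        job.setInputFormatClass(SequenceFileInputFormat.class);
    }
}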
5. SequenceFileAsTextInputFormat
Hadoop SequenceFileAsTextInputFormat is another form of SequenceFileInputFormat which converts the sequence file's keys and values to Text objects. The conversion is performed by calling toString() on the keys and values. This InputFormat makes sequence files suitable input for Streaming.
6. SequenceFileAsBinaryInputFormat
Hadoop SequenceFileAsBinaryInputFormat is a variant of SequenceFileInputFormat with which we can retrieve the sequence file's keys and values as opaque binary objects.
7. NLineInputFormat
Hadoop NLineInputFormat is another form of TextInputFormat where the keys are the byte offsets of the lines and the values are the contents of the lines. With TextInputFormat and KeyValueTextInputFormat, each mapper receives a variable number of lines of input, and the number depends on the size of the split and the length of the lines. If we want our mappers to receive a fixed number of lines of input, we use NLineInputFormat.
N is the number of lines of input that each mapper receives. By default (N=1), each mapper receives exactly one line of input. If N=2, then each split contains two lines: one mapper receives the first two key-value pairs and another mapper receives the next two key-value pairs.
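A minimal sketch (the job name is hypothetical) of fixing the number of lines per split; NLineInputFormat.setNumLinesPerSplit sets N, which is also exposed as the mapreduce.input.lineinputformat.linespermap property:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;

public class NLineExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "nline-example");
        job.setInputFormatClass(NLineInputFormat.class);
        // Each mapper will receive exactly two input lines (N = 2).
        NLineInputFormat.setNumLinesPerSplit(job, 2);
    }
}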
8. DBInputFormat
Hadoop DBInputFormat is an InputFormat that reads data from a relational database using JDBC. As it doesn't have any sharding capabilities, we need to be careful not to swamp the database we are reading from by running too many mappers. So it is best suited to loading relatively small datasets, perhaps for joining with large datasets from HDFS using MultipleInputs. Here the key is a LongWritable while the value is a DBWritable.
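A minimal sketch of reading from a hypothetical users table over JDBC; the driver, connection URL, credentials, table and column names are all illustrative assumptions:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class DbInputExample {
    // Value class mapping one row of the hypothetical "users" table.
    public static class UserRecord implements Writable, DBWritable {
        int id;
        String name;
        public void readFields(ResultSet rs) throws SQLException { id = rs.getInt("id"); name = rs.getString("name"); }
        public void write(PreparedStatement ps) throws SQLException { ps.setInt(1, id); ps.setString(2, name); }
        public void readFields(DataInput in) throws IOException { id = in.readInt(); name = in.readUTF(); }
        public void write(DataOutput out) throws IOException { out.writeInt(id); out.writeUTF(name); }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // JDBC driver, URL and credentials are illustrative only.
        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                "jdbc:mysql://localhost/mydb", "dbuser", "dbpassword");
        Job job = Job.getInstance(conf, "db-input-example");
        job.setInputFormatClass(DBInputFormat.class);
        // Keys are LongWritable record indices; values are UserRecord (a DBWritable).
        DBInputFormat.setInput(job, UserRecord.class, "users", null, "id", "id", "name");
    }
}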
Hadoop Output Format
As we know, the Reducer takes as input the set of intermediate key-value pairs produced by the mapper and runs a reducer function on them to generate output, which is again zero or more key-value pairs.
RecordWriter writes these output key-value pairs from the Reducer phase to output files.
Types of Hadoop Output Formats
i. TextOutputFormat
The default Hadoop reducer Output Format in MapReduce is TextOutputFormat, which writes (key, value) pairs on individual lines of text files. Its keys and values can be of any type, since TextOutputFormat turns them into strings by calling toString() on them. Each key-value pair is separated by a tab character, which can be changed using the mapreduce.output.textoutputformat.separator property. KeyValueTextInputFormat is the counterpart for reading these output text files, since it breaks lines into key-value pairs based on a configurable separator.
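A minimal sketch (output path and separator choice are illustrative) of changing the key-value separator and setting the output directory:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class TextOutputExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Replace the default tab separator with a comma.
        conf.set("mapreduce.output.textoutputformat.separator", ",");
        Job job = Job.getInstance(conf, "text-output-example");
        job.setOutputFormatClass(TextOutputFormat.class);
        FileOutputFormat.setOutputPath(job, new Path("/user/data/output"));
    }
}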
ii. SequenceFileOutputFormat
It is an Output Format which writes sequence files as its output. It is a useful intermediate format for passing data between MapReduce jobs: it rapidly serializes arbitrary data types to the file, and the corresponding SequenceFileInputFormat deserializes the file into the same types, presenting the data to the next mapper in the same manner as it was emitted by the previous reducer. Sequence files are also compact and readily compressible. Compression is controlled by the static methods on SequenceFileOutputFormat.
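A minimal sketch (the job name is hypothetical) of selecting the format and enabling block compression through its static methods:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.SequenceFile.CompressionType;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class SequenceFileOutputExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "seqfile-output-example");
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        // Static methods control output compression: enable it, choose a codec,
        // and compress whole blocks of records rather than individual values.
        SequenceFileOutputFormat.setCompressOutput(job, true);
        SequenceFileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
        SequenceFileOutputFormat.setOutputCompressionType(job, CompressionType.BLOCK);
    }
}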
iii. SequenceFileAsBinaryOutputFormat
It is another form of SequenceFileOutputFormat which writes keys and values to a sequence file in raw binary format; it is the counterpart to SequenceFileAsBinaryInputFormat.
iv. MapFileOutputFormat
It is another form of FileOutputFormat in Hadoop Output Format, which is used to write the output as map files. The keys in a MapFile must be added in order, so we need to ensure that the reducer emits keys in sorted order.
v. MultipleOutputs
It allows writing data to files whose names are derived from the output keys and values, or in fact from an arbitrary string.
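A minimal reducer sketch (class and file-naming choices are hypothetical) in which the base output path for each record is derived from the key via MultipleOutputs:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

public class PerKeyFileReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private MultipleOutputs<Text, IntWritable> out;

    @Override
    protected void setup(Context context) {
        out = new MultipleOutputs<>(context);
    }

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        // The third argument is the base output path; here it is derived from the key,
        // so each distinct key goes to its own output file.
        out.write(key, new IntWritable(sum), key.toString());
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        out.close();
    }
}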
vi. LazyOutputFormat
Sometimes FileOutputFormat will create output files even if they are empty. LazyOutputFormat is a wrapper OutputFormat which ensures that the output file is created only when a record is emitted for a given partition.
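A minimal sketch (the job name is hypothetical): instead of setting the real output format directly, wrap it with LazyOutputFormat:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class LazyOutputExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "lazy-output-example");
        // Used in place of job.setOutputFormatClass(TextOutputFormat.class):
        // part files are created only when the first record is written to them.
        LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class);
    }
}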
vii. DBOutputFormat
DBOutputFormat in Hadoop is an Output Format for writing to relational databases and to HBase. It sends the reduce output to a SQL table. It accepts key-value pairs where the key has a type implementing DBWritable. The returned RecordWriter writes only the key to the database with a batch SQL query.
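A minimal sketch of directing the reduce output to a hypothetical results table; the driver, URL, credentials, table and column names are illustrative assumptions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;

public class DbOutputExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // JDBC driver, URL and credentials are illustrative only.
        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
                "jdbc:mysql://localhost/mydb", "dbuser", "dbpassword");
        Job job = Job.getInstance(conf, "db-output-example");
        job.setOutputFormatClass(DBOutputFormat.class);
        // Reduce output keys must implement DBWritable; each key becomes one row
        // written into the "results" table's "word" and "count" columns.
        DBOutputFormat.setOutput(job, "results", "word", "count");
    }
}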
REF
https://data-flair.training/blogs/hadoop-inputformat/