The next two lessons are all about data processing systems. We’ll start with batch processing and MapReduce, most often implemented with Hadoop; this is considered the first generation of data processing. From there we’ll move on to stream processing, most often done with Spark. These topics are deeply connected. For example, Spark can operate on HDFS, the Hadoop file system. Even though it may seem outdated to learn about batch processing with Hadoop, it’s worth understanding the field even if you intend to live the streaming-data life.