Hadoop is frequently the first thing that comes to mind when "Big Data" is discussed. As arguably the most widely deployed Map/Reduce software system, its application to "Scientific Big Data" (SBD) frequently raises questions about running it on HPC systems. In this tutorial we provide an overview of using Hadoop 2 with YARN, drawn from multiple distributions, on Cray Cluster Solutions systems. Topics include straightforward software installation and configuration on the Cray CS300 cluster supercomputer, performance tuning opportunities, and suggested cluster system configurations. The use of Hadoop's Distributed File System (HDFS) with SBD will also be covered, ranging from choices in storage technologies to strategies for using Hadoop with existing SBD information and storage formats.

In the final hour, Judy Qiu from Indiana University will conclude the Hadoop tutorial. Many scientific applications are data intensive: it is estimated that organizations with high-end computing infrastructures and data centers are doubling the amount of data they archive every year. Twister extends Hadoop MapReduce to enable HPC-Cloud interoperability. We show how to apply Twister to support large-scale iterative computations that are common in many important data mining and machine learning applications.
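The iterative pattern that Twister targets can be illustrated with a minimal, framework-free sketch in Python. This is a hypothetical stand-in using k-means clustering (a common iterative data-mining workload), not Twister's actual API; the function names and driver loop are assumptions for illustration only:

```python
def kmeans_map(point, centroids):
    """Map step: emit (nearest-centroid index, point)."""
    dists = [sum((p - c) ** 2 for p, c in zip(point, centroid))
             for centroid in centroids]
    return dists.index(min(dists)), point

def kmeans_reduce(assignments, k, dim):
    """Reduce step: average the points assigned to each centroid."""
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for idx, point in assignments:
        counts[idx] += 1
        for d in range(dim):
            sums[idx][d] += point[d]
    # Keep the old sum (all zeros) for empty clusters rather than divide by zero.
    return [[s / counts[i] for s in sums[i]] if counts[i] else sums[i]
            for i in range(k)]

def iterative_mapreduce(points, centroids, iterations=10):
    """Driver loop: the step that classic Hadoop handles poorly, since
    each iteration becomes a fresh job that re-reads the static input
    data from HDFS instead of reusing it."""
    for _ in range(iterations):
        assignments = [kmeans_map(p, centroids) for p in points]
        centroids = kmeans_reduce(assignments,
                                  len(centroids), len(points[0]))
    return centroids
```

In an iterative MapReduce runtime such as Twister, the static input data can stay cached in worker memory across iterations, which is the key difference from re-launching a standard Hadoop job per iteration.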