Apache Spark has been getting a lot of attention lately.
Not long after the news that IBM would pour thousands of researchers into it,
Spark became available in "BigInsights for Apache Hadoop V4.1".
If I can just master Spark, maybe I'll be able to put food on the table in the future?
Just kidding. Still, I'd like to study it a bit before bed.
★Download
Download the Maven and Spark packages from the Apache website:
・apache-maven-3.3.3-bin.tar.gz
・spark-1.4.0.tgz (package type: Source Code)
↑If you pick the source-code package and build it yourself, you get not only Spark itself but also the bundled examples.
★Maven setup (details mostly omitted).
Extract it, put it in place, and add it to PATH, as sketched below.
Extraction command: tar zxvf apache-maven-3.3.3-bin.tar.gz
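A minimal sketch of the placement and PATH steps (the /usr/local path matches the "Maven home" shown in the version output below; adjust for your environment):
$ sudo mv apache-maven-3.3.3 /usr/local/
$ export PATH=/usr/local/apache-maven-3.3.3/bin:$PATH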
$ mvn -version
Apache Maven 3.3.3 (7994120775791599e205a5524ec3e0dfe41d4a06; 2015-04-22T20:57:37+09:00)
Maven home: /usr/local/apache-maven-3.3.3
Java version: 1.8.0_45, vendor: Oracle Corporation
Java home: /usr/java/jdk1.8.0_45/jre
Default locale: ja_JP, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-504.23.4.el6.x86_64", arch: "amd64", family: "unix"
Then unpack spark-1.4.0.tgz and run the build at the top of the extracted source tree:
$ mvn -DskipTests clean package
…
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Spark Project Parent POM ........................... SUCCESS [03:00 min]
[INFO] Spark Launcher Project ............................. SUCCESS [01:11 min]
[INFO] Spark Project Networking ........................... SUCCESS [ 11.874 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [ 5.619 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [ 4.642 s]
[INFO] Spark Project Core ................................. SUCCESS [04:07 min]
[INFO] Spark Project Bagel ................................ SUCCESS [ 17.762 s]
[INFO] Spark Project GraphX ............................... SUCCESS [ 57.769 s]
[INFO] Spark Project Streaming ............................ SUCCESS [01:18 min]
[INFO] Spark Project Catalyst ............................. SUCCESS [01:26 min]
[INFO] Spark Project SQL .................................. SUCCESS [01:56 min]
[INFO] Spark Project ML Library ........................... SUCCESS [02:20 min]
[INFO] Spark Project Tools ................................ SUCCESS [ 10.926 s]
[INFO] Spark Project Hive ................................. SUCCESS [01:47 min]
[INFO] Spark Project REPL ................................. SUCCESS [ 35.895 s]
[INFO] Spark Project Assembly ............................. SUCCESS [01:06 min]
[INFO] Spark Project External Twitter ..................... SUCCESS [ 21.557 s]
[INFO] Spark Project External Flume Sink .................. SUCCESS [ 26.509 s]
[INFO] Spark Project External Flume ....................... SUCCESS [ 21.713 s]
[INFO] Spark Project External MQTT ........................ SUCCESS [ 47.694 s]
[INFO] Spark Project External ZeroMQ ...................... SUCCESS [ 16.291 s]
[INFO] Spark Project External Kafka ....................... SUCCESS [ 31.361 s]
[INFO] Spark Project Examples ............................. SUCCESS [01:55 min]
[INFO] Spark Project External Kafka Assembly .............. SUCCESS [ 19.984 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 25:54 min
[INFO] Finished at: 2015-07-14T21:30:39+09:00
[INFO] Final Memory: 81M/980M
[INFO] ------------------------------------------------------------------------
→Over a FLET'S Hikari fiber line, the download and build took about 26 minutes.
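Incidentally, the command above builds against the stock Hadoop version. If you need to link against a specific one, the "Building Spark" docs for 1.4 let you select it via Maven profiles; for example (the version numbers here are just illustrative):
$ mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package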
$ ./bin/spark-shell
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/07/14 21:35:07 INFO SecurityManager: Changing view acls to: jeff
15/07/14 21:35:07 INFO SecurityManager: Changing modify acls to: jeff
15/07/14 21:35:07 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(jeff); users with modify permissions: Set(jeff)
15/07/14 21:35:07 INFO HttpServer: Starting HTTP Server
15/07/14 21:35:07 INFO Utils: Successfully started service 'HTTP class server' on port 52125.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 1.4.0
/_/
Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_45)
Type in expressions to have them evaluated.
Type :help for more information.
…
scala> sc.parallelize(1 to 1000).count()
<console>:14: error: not found: value sc
sc.parallelize(1 to 1000).count()
^
Some kind of error came up. I'm out of time, so off to bed for today.
I'll dig into the cause when I get a chance...
Sorry to stop at a point like this. (-_-)zzz
------------------------------
Work resumed five days later (too slow, I know; sorry, I only get to this on days off when I have free time).
↑Picking up where I left off: hunting for the cause.
Part of the error output is below (it was mostly this same error repeating):
15/07/19 10:22:23 INFO Slf4jLogger: Slf4jLogger started
15/07/19 10:22:23 INFO Remoting: Starting remoting
15/07/19 10:22:26 ERROR NettyTransport: failed to bind to /203.189.105.208:0, shutting down Netty transport
15/07/19 10:22:26 WARN Utils: Service 'sparkDriver' could not bind on port 0. Attempting port 1.
15/07/19 10:22:26 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/07/19 10:22:26 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/07/19 10:22:26 INFO Slf4jLogger: Slf4jLogger started
15/07/19 10:22:26 INFO Remoting: Starting remoting
★The cause appears to be that I had set the machine's hostname to ***.com.
A host named ***.com actually existed elsewhere (in my case, a domain I own myself).
The reference was:
http://stackoverflow.com/questions/28162991/cant-run-spark-1-2-in-standalone-mode-on-mac
→see Answer 2
Change the machine's hostname from ***.com to ***.local and reboot.
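As an aside: if changing the hostname isn't an option, pinning the driver's bind address via Spark's documented SPARK_LOCAL_IP environment variable is a commonly cited workaround for this kind of bind failure (untested here):
$ SPARK_LOCAL_IP=127.0.0.1 ./bin/spark-shell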
★This time, the following error came up:
Type :help for more information.
java.net.UnknownHostException: ***.local: ***.local: unknown error
at java.net.InetAddress.getLocalHost(InetAddress.java:1484)
The hostname can't be resolved, so it was probably because I hadn't set up /etc/hosts.
I added the following entry:
192.168.xxx.yyy *** ***.local
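If you want to confirm the entry resolves before relaunching, getent consults /etc/hosts through NSS (unlike DNS-only tools such as nslookup):
$ getent hosts ***.local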
With that in place, running it again cleared the error and spark-shell started up normally.
$ ./bin/spark-shell
…
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 1.4.0
/_/
Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_45)
Type in expressions to have them evaluated.
Type :help for more information.
15/07/19 10:48:27 INFO SparkContext: Running Spark version 1.4.0
…
scala>
★Below is a smoke test using the Scala shell and the bundled README.md (a plain text file):
(1) Count the lines in README.md
(2) Print the first line of README.md
scala> val lines=sc.textFile("README.md")
15/07/19 10:55:34 INFO MemoryStore: ensureFreeSpace(110248) called with curMem=1900, maxMem=278019440
15/07/19 10:55:34 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 107.7 KB, free 265.0 MB)
15/07/19 10:55:34 INFO MemoryStore: ensureFreeSpace(10090) called with curMem=112148, maxMem=278019440
15/07/19 10:55:34 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 9.9 KB, free 265.0 MB)
15/07/19 10:55:34 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:34144 (size: 9.9 KB, free: 265.1 MB)
15/07/19 10:55:34 INFO SparkContext: Created broadcast 1 from textFile at <console>:21
lines: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[2] at textFile at <console>:21
scala> lines.count()
15/07/19 10:58:20 INFO FileInputFormat: Total input paths to process : 1
15/07/19 10:58:20 INFO SparkContext: Starting job: count at <console>:24
15/07/19 10:58:20 INFO DAGScheduler: Got job 1 (count at <console>:24) with 2 output partitions (allowLocal=false)
15/07/19 10:58:20 INFO DAGScheduler: Final stage: ResultStage 1(count at <console>:24)
15/07/19 10:58:20 INFO DAGScheduler: Parents of final stage: List()
15/07/19 10:58:20 INFO DAGScheduler: Missing parents: List()
15/07/19 10:58:20 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[2] at textFile at <console>:21), which has no missing parents
15/07/19 10:58:20 INFO MemoryStore: ensureFreeSpace(2960) called with curMem=122238, maxMem=278019440
15/07/19 10:58:20 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 2.9 KB, free 265.0 MB)
15/07/19 10:58:20 INFO MemoryStore: ensureFreeSpace(1745) called with curMem=125198, maxMem=278019440
15/07/19 10:58:20 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 1745.0 B, free 265.0 MB)
15/07/19 10:58:20 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:34144 (size: 1745.0 B, free: 265.1 MB)
15/07/19 10:58:20 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:874
15/07/19 10:58:20 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 1 (MapPartitionsRDD[2] at textFile at <console>:21)
15/07/19 10:58:20 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
15/07/19 10:58:20 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, localhost, PROCESS_LOCAL, 1410 bytes)
15/07/19 10:58:20 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 3, localhost, PROCESS_LOCAL, 1410 bytes)
15/07/19 10:58:20 INFO Executor: Running task 0.0 in stage 1.0 (TID 2)
15/07/19 10:58:20 INFO Executor: Running task 1.0 in stage 1.0 (TID 3)
15/07/19 10:58:20 INFO HadoopRDD: Input split: file:/home/jeff/spark-1.4.0/README.md:0+1812
15/07/19 10:58:20 INFO HadoopRDD: Input split: file:/home/jeff/spark-1.4.0/README.md:1812+1812
15/07/19 10:58:20 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
15/07/19 10:58:20 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
15/07/19 10:58:20 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
15/07/19 10:58:20 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
15/07/19 10:58:20 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
15/07/19 10:58:20 INFO Executor: Finished task 1.0 in stage 1.0 (TID 3). 1830 bytes result sent to driver
15/07/19 10:58:20 INFO Executor: Finished task 0.0 in stage 1.0 (TID 2). 1830 bytes result sent to driver
15/07/19 10:58:20 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 2) in 88 ms on localhost (1/2)
15/07/19 10:58:20 INFO DAGScheduler: ResultStage 1 (count at <console>:24) finished in 0.088 s
15/07/19 10:58:20 INFO DAGScheduler: Job 1 finished: count at <console>:24, took 0.136062 s
res1: Long = 98
scala> 15/07/19 10:58:20 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 3) in 89 ms on localhost (2/2)
15/07/19 10:58:20 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
scala> lines.first()
15/07/19 10:59:40 INFO SparkContext: Starting job: first at <console>:24
15/07/19 10:59:40 INFO DAGScheduler: Got job 2 (first at <console>:24) with 1 output partitions (allowLocal=true)
15/07/19 10:59:40 INFO DAGScheduler: Final stage: ResultStage 2(first at <console>:24)
15/07/19 10:59:40 INFO DAGScheduler: Parents of final stage: List()
15/07/19 10:59:40 INFO DAGScheduler: Missing parents: List()
15/07/19 10:59:40 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[2] at textFile at <console>:21), which has no missing parents
15/07/19 10:59:40 INFO MemoryStore: ensureFreeSpace(3128) called with curMem=126943, maxMem=278019440
15/07/19 10:59:40 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 3.1 KB, free 265.0 MB)
15/07/19 10:59:40 INFO MemoryStore: ensureFreeSpace(1805) called with curMem=130071, maxMem=278019440
15/07/19 10:59:40 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 1805.0 B, free 265.0 MB)
15/07/19 10:59:40 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on localhost:34144 (size: 1805.0 B, free: 265.1 MB)
15/07/19 10:59:40 INFO SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:874
15/07/19 10:59:40 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[2] at textFile at <console>:21)
15/07/19 10:59:40 INFO TaskSchedulerImpl: Adding task set 2.0 with 1 tasks
15/07/19 10:59:40 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 4, localhost, PROCESS_LOCAL, 1410 bytes)
15/07/19 10:59:40 INFO Executor: Running task 0.0 in stage 2.0 (TID 4)
15/07/19 10:59:40 INFO HadoopRDD: Input split: file:/home/jeff/spark-1.4.0/README.md:0+1812
15/07/19 10:59:40 INFO Executor: Finished task 0.0 in stage 2.0 (TID 4). 1809 bytes result sent to driver
15/07/19 10:59:40 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 4) in 18 ms on localhost (1/1)
15/07/19 10:59:40 INFO DAGScheduler: ResultStage 2 (first at <console>:24) finished in 0.018 s
15/07/19 10:59:40 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
15/07/19 10:59:40 INFO DAGScheduler: Job 2 finished: first at <console>:24, took 0.029939 s
res2: String = # Apache Spark
★Incidentally, the flood of INFO messages seen above can be suppressed.
Config file: conf/log4j.properties
How to set it up:
$ cp conf/log4j.properties.template conf/log4j.properties
$ vi conf/log4j.properties
…
#log4j.rootCategory=INFO, console
log4j.rootCategory=WARN, console
…
Save and quit. (The change takes effect the next time spark-shell starts.)
★Re-running the same smoke test, the output now looks like this:
[jeff@liandus spark-1.4.0]$ ./bin/spark-shell
15/07/19 11:10:27 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 1.4.0
/_/
Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_45)
Type in expressions to have them evaluated.
Type :help for more information.
Spark context available as sc.
SQL context available as sqlContext.
scala> val lines=sc.textFile("README.md")
lines: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[1] at textFile at <console>:21
scala> lines.count()
res0: Long = 98
scala> lines.first()
res1: String = # Apache Spark
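As a small follow-on in the same shell, the RDD can also be filtered before counting; filter and contains are standard RDD/String API, so this should run as-is (the resulting count depends on the README contents, so it's omitted here):
scala> val sparkLines = lines.filter(line => line.contains("Spark"))
scala> sparkLines.count()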