Today's post isn't about any complex piece of Spark's internals; it is just a few notes on how to read and trace the code. As everyone knows, Spark is written in Scala, and with Scala's abundance of syntactic sugar it is easy to lose the thread while following the code. Spark also uses Akka for message passing, so how do you work out who receives a given message?
When tracing code we often lean on the logs: for every line of log output we would like to know who the caller is. But without a deep knowledge of Spark's internals, or with only a passing familiarity with Scala, the answer may not come quickly. Is there a shortcut?
My trick is to add the following line at the point where the log message is produced:
new Throwable().printStackTrace()
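If dumping to stderr is inconvenient, for example when logs are collected centrally, the same trick can be routed through a logger instead. A minimal, Spark-independent sketch in plain Scala:

object CallTrace {
  // Render the current call stack as a string so it can be emitted
  // through a logger instead of being dumped straight to stderr.
  def callTrace(): String =
    new Throwable().getStackTrace.map("  at " + _).mkString("\n")

  def main(args: Array[String]): Unit =
    println(callTrace())
}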
Let me illustrate with a concrete example.
Say we start spark-shell and run something as simple as sc.textFile("README.md"); it produces the log output below:
14/07/05 19:53:27 INFO MemoryStore: ensureFreeSpace(32816) called with curMem=0, maxMem=308910489
14/07/05 19:53:27 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 32.0 KB, free 294.6 MB)
14/07/05 19:53:27 DEBUG BlockManager: Put block broadcast_0 locally took 78 ms
14/07/05 19:53:27 DEBUG BlockManager: Putting block broadcast_0 without replication took 79 ms
res0: org.apache.spark.rdd.RDD[String] = README.md MappedRDD[1] at textFile at <console>:13
Now suppose I want to know who calls tryToPut, the function that produces the second log line. What to do?
Open MemoryStore.scala and find the following statement:
logInfo("Block %s stored as %s in memory (estimated size %s, free %s)".format( blockId, valuesOrBytes, Utils.bytesToString(size), Utils.bytesToString(freeMemory)))
Right above this statement, add:
new Throwable().printStackTrace()
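Putting the pieces together, the patched region of tryToPut would look roughly like this. This is a sketch against the 1.0-era MemoryStore.scala; the extra logWarning line is assumed here only because the matching WARN message shows up in the output further below:

// sketch of the patched section inside MemoryStore.tryToPut
new Throwable().printStackTrace()   // dump the callers of tryToPut to stderr
logWarning("just show the calltrace by entering some modified code")
logInfo("Block %s stored as %s in memory (estimated size %s, free %s)".format(
  blockId, valuesOrBytes, Utils.bytesToString(size), Utils.bytesToString(freeMemory)))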
Then rebuild the source:
sbt/sbt assembly
Open spark-shell again and run sc.textFile("README.md"); the output now contains a full stack trace. Read it from the top: the first frame is tryToPut itself, and each following line is the caller of the one above it, so the callers of tryToPut can be read off directly:
14/07/05 19:53:27 INFO MemoryStore: ensureFreeSpace(32816) called with curMem=0, maxMem=308910489
14/07/05 19:53:27 WARN MemoryStore: just show the calltrace by entering some modified code
java.lang.Throwable
    at org.apache.spark.storage.MemoryStore.tryToPut(MemoryStore.scala:182)
    at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:76)
    at org.apache.spark.storage.MemoryStore.putValues(MemoryStore.scala:92)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:699)
    at org.apache.spark.storage.BlockManager.put(BlockManager.scala:570)
    at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:821)
    at org.apache.spark.broadcast.HttpBroadcast.<init>(HttpBroadcast.scala:52)
    at org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:35)
    at org.apache.spark.broadcast.HttpBroadcastFactory.newBroadcast(HttpBroadcastFactory.scala:29)
    at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
    at org.apache.spark.SparkContext.broadcast(SparkContext.scala:787)
    at org.apache.spark.SparkContext.hadoopFile(SparkContext.scala:556)
    at org.apache.spark.SparkContext.textFile(SparkContext.scala:468)
    at $line5.$read$$iwC$$iwC$$iwC$$iwC.<init>(<console>:13)
    at $line5.$read$$iwC$$iwC$$iwC.<init>(<console>:18)
    at $line5.$read$$iwC$$iwC.<init>(<console>:20)
    at $line5.$read$$iwC.<init>(<console>:22)
    at $line5.$read.<init>(<console>:24)
    at $line5.$read$.<init>(<console>:28)
    at $line5.$read$.<clinit>(<console>)
    at $line5.$eval$.<init>(<console>:7)
    at $line5.$eval$.<clinit>(<console>)
    at $line5.$eval.$print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:788)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1056)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:614)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:645)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:609)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:796)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:841)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:753)
    at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:601)
    at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:608)
    at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:611)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:936)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:884)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:884)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:884)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:982)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:303)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:55)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
14/07/05 19:53:27 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 32.0 KB, free 294.6 MB)
14/07/05 19:53:27 DEBUG BlockManager: Put block broadcast_0 locally took 78 ms
14/07/05 19:53:27 DEBUG BlockManager: Putting block broadcast_0 without replication took 79 ms
res0: org.apache.spark.rdd.RDD[String] = README.md MappedRDD[1] at textFile at <console>:13
After modifying the code, if you don't intend to commit the change, how do you sync your local tree back to the latest upstream content?
git reset --hard
git pull origin master
Finding the receiver of a message is comparatively easy: a well-aimed grep is all it takes, provided you have a basic grasp of the actor model.
Again, a concrete example: we know CoarseGrainedSchedulerBackend sends the LaunchTask message, so who is the receiver? Just run the following:
grep LaunchTask -r core/src/main
The output below makes it clear that CoarseGrainedExecutorBackend is the receiver of LaunchTask; to see how the message is handled after it arrives, just look at the receiver's receive function.
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala:    case LaunchTask(data) =>
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala:      logError("Received LaunchTask command but executor was null")
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedClusterMessage.scala:  case class LaunchTask(data: SerializableBuffer) extends CoarseGrainedClusterMessage
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala:        executorActor(task.executorId) ! LaunchTask(new SerializableBuffer(serializedTask))
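For readers less familiar with the actor model, the two sides follow the standard Akka send/receive pattern. Below is a condensed, self-contained sketch using the classic Akka API of that era; it is not the actual Spark code, and the message carries a plain String instead of a SerializableBuffer:

import akka.actor.{Actor, ActorRef, ActorSystem, Props}

// Simplified stand-in for the real CoarseGrainedClusterMessage.
case class LaunchTask(data: String)

// Receiver side: like CoarseGrainedExecutorBackend, it pattern-matches
// on the incoming message inside its receive method.
class ExecutorSide extends Actor {
  def receive = {
    case LaunchTask(data) =>
      println(s"executor got task: $data")  // the real code deserializes and runs the task
  }
}

// Sender side: like CoarseGrainedSchedulerBackend, it fires the message with `!`.
object SchedulerSide {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem("demo")
    val executor: ActorRef = system.actorOf(Props[ExecutorSide], "executor")
    executor ! LaunchTask("serialized-task-bytes")
    Thread.sleep(500)    // give the actor time to process before shutdown
    system.shutdown()
  }
}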
Today's content is fairly simple, nothing technically deep; I am writing it down so that I don't forget it with time.
The previous post showed how to inspect the call stack by modifying the source. Useful as that is, every change requires a rebuild, which costs quite a bit of time, and the modification is intrusive and inelegant. This post describes how to trace and debug the Spark source with IntelliJ IDEA instead.
This post assumes a Linux development environment (I use Arch Linux myself) with the required software, namely IntelliJ IDEA, sbt, and git, already installed.
Install the Scala plugin for IDEA as follows:
1. Open File->Settings and go to the Plugins page.
2. Click Install JetBrains Plugin on the right, type scala into the search box on the left of the popup window, and click install.
3. Once the Scala plugin is installed, restart IDEA for it to take effect.
Since IDEA 13 supports sbt natively, there is no need to install a separate sbt plugin.
Download the source; we assume git is used to sync the latest code:
git clone https://github.com/apache/spark.git
Import the Spark source
1. 选择File->Import Project, 在弹出的窗口中指定spark源码目录
2. 选择项目类型为sbt project,然后点击next
3. 在新弹出的窗口中先选中"Use auto-import",然后点击Finish
Once the import is set up, a long wait begins: IDEA compiles the imported source and builds its file indexes.
If the status bar shows a message like "is waiting for .sbt.ivy.lock", the lock file cannot be created and must be removed by hand:
cd $HOME/.ivy2
rm *.lock
After deleting the lock files, restart IDEA; it will resume the interrupted sbt import.
Compiling the Spark source with IDEA fails several times along the way; the root cause is that sbt/sbt gen-idea does not resolve the module dependencies properly.
The fix is as follows:
1. 选择File->Project Structures
2. 在右侧dependencies中添加新的module
选择spark-core
Other modules that fail to compile, such as streaming-twitter, streaming-kafka, streaming-flume, and streaming-mqtt, are fixed the same way.
Note that the fix for the Examples compile error is slightly different: when specifying its Dependencies, choose Module dependency rather than Library, and select sql in the popup.
For more on resolving compilation errors, see this link.
1. 选择Run->Edit configurations
2. 添加Application,注意右侧窗口中配置项内容的填写,分别为Main class, vm options, working directory, use classpath of module
-Dspark.master=local 指定Spark的运行模式,可根据需要作适当修改。
3. At this point a "Run LogQuery" entry appears in the Run menu; run it once to make sure everything compiles.
4. To set a breakpoint, double-click in the left gutter of the source file, then select Run->"Debug LogQuery". Done: you can now inspect variables and the call stack.
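As a concrete illustration of step 2, a configuration for the LogQuery example could be filled in as follows. The values are illustrative: the working directory must point at your own checkout, and the module name depends on how sbt generated the project:

Main class:              org.apache.spark.examples.LogQuery
VM options:              -Dspark.master=local
Working directory:       /home/user/spark
Use classpath of module: spark-examples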