This article walks through a small Spark SQL code example and the debugging of an error it ran into. The content is meant to be easy to follow; hopefully it will help anyone who hits the same problem.
Based on the Spark SQL example from the official documentation, I wrote the following script:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.createSchemaRDD

case class UserLog(userid: String, time1: String, platform: String, ip: String, openplatform: String, appid: String)

// Create an RDD of UserLog objects and register it as a table.
val user = sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*")
  .map(_.split("\\^"))
  .map(u => UserLog(u(0), u(1), u(2), u(3), u(4), u(5)))
user.registerTempTable("user_log")

// SQL statements can be run by using the sql methods provided by sqlContext.
val allusers = sqlContext.sql("SELECT * FROM user_log")

// The results of SQL queries are SchemaRDDs and support all the normal RDD operations.
// The columns of a row in the result can be accessed by ordinal.
allusers.map(t => "UserId:" + t(0)).collect().foreach(println)
Running it failed with the following error:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 50.0 failed 1 times, most recent failure: Lost task 1.0 in stage 50.0 (TID 73, localhost): java.lang.ArrayIndexOutOfBoundsException: 5
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:30)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:30)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1319)
        at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:910)
        at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:910)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1319)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1319)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
        at org.apache.spark.scheduler.Task.run(Task.scala:56)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
The log shows the job failed with a java.lang.ArrayIndexOutOfBoundsException, i.e. an array access went out of bounds.
Checking the field count of every line with the command
sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*").map(_.split("\\^")).foreach(x => println(x.size))
shows that one record splits into only 5 fields:
6
6
6
6
6
6
6
6
6
6
15/05/21 20:47:37 INFO Executor: Finished task 0.0 in stage 2.0 (TID 4). 1774 bytes result sent to driver
6
6
6
6
6
6
5
6
15/05/21 20:47:37 INFO Executor: Finished task 1.0 in stage 2.0 (TID 5). 1774 bytes result sent to driver
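To pull out the offending raw lines themselves rather than just their field counts, a small filter works (a sketch of my own, not part of the original post):

// Collect a sample of the raw lines that do not split into exactly 6 fields.
val badLines = sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*")
  .filter(_.split("\\^").length != 6)
badLines.take(10).foreach(println)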
The culprit is a record with empty trailing fields: "44671799^2015-03-27 20:56:05^2^117.93.193.238^0^^". Scala strings use Java's String.split, and with the default limit it drops trailing empty strings from the result, so this line yields fewer than 6 elements and u(5) is out of bounds.
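As a quick illustration of that behaviour (a minimal check in the spark-shell, not from the original post), splitting the offending line with and without an explicit negative limit gives different lengths:

val line = "44671799^2015-03-27 20:56:05^2^117.93.193.238^0^^"
line.split("\\^").length      // 5 -- trailing empty strings are dropped by default
line.split("\\^", -1).length  // 7 -- every position is kept, including the empty trailing ones

With the negative limit every field position survives, so for this line u(5) exists and is simply the empty string.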
A fix I found online is to use the two-argument split(str, int) overload with a negative limit, which keeps trailing empty strings. The modified code:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.createSchemaRDD

case class UserLog(userid: String, time1: String, platform: String, ip: String, openplatform: String, appid: String)

// Create an RDD of UserLog objects and register it as a table.
// split with limit -1 keeps trailing empty fields, so every row has all 6 columns.
val user = sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*")
  .map(_.split("\\^", -1))
  .map(u => UserLog(u(0), u(1), u(2), u(3), u(4), u(5)))
user.registerTempTable("user_log")

// SQL statements can be run by using the sql methods provided by sqlContext.
val allusers = sqlContext.sql("SELECT * FROM user_log")

// The results of SQL queries are SchemaRDDs and support all the normal RDD operations.
// The columns of a row in the result can be accessed by ordinal.
allusers.map(t => "UserId:" + t(0)).collect().foreach(println)
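If some records might be genuinely truncated rather than just ending in empty fields, an alternative (a defensive sketch of my own, not from the original fix) is to keep the negative limit and additionally drop rows that still do not reach six fields:

val user = sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*")
  .map(_.split("\\^", -1))
  .filter(_.length >= 6)  // skip malformed rows instead of failing the whole job
  .map(u => UserLog(u(0), u(1), u(2), u(3), u(4), u(5)))
user.registerTempTable("user_log")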
That wraps up this look at the Spark SQL code example. Hopefully the walkthrough above is useful.