Shulou(Shulou.com)05/31 Report--
This article analyzes the source code of Spark's bundled examples, walking through what each DataFrame operation in SparkSQLExample does. I hope it helps answer common questions about how these examples work.
SparkSQLExample
private def runBasicDataFrameExample(spark: SparkSession): Unit = {
  val df = spark.read.json("hdfs://master:9000/sparkfiles/people.json")
  df.show()

  import spark.implicits._
  df.printSchema()
  df.select("name").show()
  df.select($"name", $"age" + 1).show()
  df.filter($"age" > 21).show()
  df.groupBy("age").count().show()

  df.createOrReplaceTempView("people")
  val sqlDF = spark.sql("SELECT * FROM people")
  sqlDF.show()
}
The contents of people.json are as follows:
{"name": "Michael"}
{"name": "Andy", "age": 30}
{"name": "Justin", "age": 19}
The first step reads the file and builds a DataFrame. DataFrame is defined in the sql package object and is essentially an alias for Dataset[Row].
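The alias is a single line in Spark's sql package object; paraphrasing the relevant declaration from the Spark source:

```scala
package object sql {
  // A DataFrame is simply a Dataset of untyped, generic Row objects
  type DataFrame = Dataset[Row]
}
```

This is why every method that "returns a DataFrame" in this walkthrough is really returning a Dataset[Row].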
Next, look at df.show(), which prints the first rows of the DataFrame as an ASCII-formatted table.
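With the three records above, the table printed by df.show() should look like the following (this matches the output shown in Spark's official SQL getting-started guide; note that the inferred columns are ordered alphabetically and Michael's missing age becomes null):

```
+----+-------+
| age|   name|
+----+-------+
|null|Michael|
|  30|   Andy|
|  19| Justin|
+----+-------+
```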
df.printSchema() prints, in tree format, the schema that Spark inferred from the JSON.
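For this file the inferred schema should look like this (age is read as long because JSON numbers without a declared schema default to long; both fields are nullable):

```
root
 |-- age: long (nullable = true)
 |-- name: string (nullable = true)
```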
df.select("name").show(): the select method returns a new DataFrame that contains only the name column.
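The projected single-column table should render as:

```
+-------+
|   name|
+-------+
|Michael|
|   Andy|
| Justin|
+-------+
```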
df.select($"name", $"age" + 1).show() returns a DataFrame in which every age has been incremented by 1.
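Because arithmetic on a null value yields null, Michael's incremented age stays null; the derived column is named after the expression:

```
+-------+---------+
|   name|(age + 1)|
+-------+---------+
|Michael|     null|
|   Andy|       31|
| Justin|       20|
+-------+---------+
```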
df.groupBy("age").count().show() deserves a closer look. First, groupBy returns a RelationalGroupedDataset, which the Spark API docs describe as "a set of methods for aggregations on a DataFrame, created by Dataset.groupBy". It provides aggregate functions such as min, max, and count, and the return value of count() is another DataFrame.
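Since every age in this file is distinct (including the null group), each group has a count of 1. The output should resemble the following, though row order after a groupBy is not guaranteed:

```
+----+-----+
| age|count|
+----+-----+
|  19|    1|
|null|    1|
|  30|    1|
+----+-----+
```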
The last fragment is interesting: you can register the DataFrame as a temporary view and then query it with SQL.
df.createOrReplaceTempView("people")
val sqlDF = spark.sql("SELECT * FROM people")
sqlDF.show()
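Once the view is registered, any SQL statement can be run against it, not just SELECT *. As a hypothetical follow-up (a sketch assuming the same SparkSession and view), you could push the earlier filter into SQL instead of the DataFrame API:

```scala
// Hypothetical query against the registered temp view;
// SQL comparisons against null are false, so Michael is excluded,
// and Justin (19) fails the predicate, leaving only Andy (30).
val adults = spark.sql("SELECT name FROM people WHERE age > 20")
adults.show()
```

The temporary view is scoped to the SparkSession that created it and disappears when the session ends.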
This concludes the walkthrough of the Spark examples source code. Pairing the theory above with hands-on practice is the best way to learn, so go and try it yourself!