This article introduces how to use Python to crawl commodity data from the Amoy data platform. The content is quite detailed; interested readers can follow along, and I hope it is helpful to you.
Preface
I recently found a good data site called "Amoy data". The data on it is all Taobao merchant data, including store name, category, list price, average transaction price, sales volume, sales amount and so on.
I didn't know about this website until a classmate told me about it. So, naturally, I started crawling it.
Project goal
Crawl the Taobao wig category data. "Wig" was just a random choice at the time; if you want to pick other categories, the site charges a fee.
Victim address
https://www.taosj.com/industry/index.html#/data/hotitems/?cid=50023283&brand=&type=&pcid=
Environment
Python 3.6
PyCharm
Crawler code
Import the required tools
import requests
import csv
To analyze the web page, first press F12 to open the developer tools, copy a piece of the data you need, and search for it to find the request that carries the data.
Find the required parameters in the URL and headers
url = 'https://www.taosj.com/data/industry/hotitems/list?cid=50023283&brand=&type=ALL&date=1596211200000&pageNo=1&pageSize=10&orderType=desc&orderField='
headers = {
    'Host': 'www.taosj.com',
    'Referer': 'https://www.taosj.com/industry/index.html',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36',
}
response = requests.get(url=url, headers=headers)
html_data = response.json()
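As a side note that is not in the original article, the same request can also be written with a params dict, a timeout, and a status check before parsing the JSON. The parameter values below simply mirror the query string above and may need adjusting.

# A sketch, assuming the endpoint accepts the same query parameters shown above
params = {
    'cid': '50023283',
    'brand': '',
    'type': 'ALL',
    'date': '1596211200000',
    'pageNo': '1',
    'pageSize': '10',
    'orderType': 'desc',
    'orderField': '',
}
response = requests.get('https://www.taosj.com/data/industry/hotitems/list',
                        params=params, headers=headers, timeout=10)
response.raise_for_status()  # stop early if the server did not return 200
html_data = response.json()

Passing params this way keeps the query string readable and lets you change pageNo later without rebuilding the whole URL.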
Extract the relevant data from the JSON data
lis = html_data['data']['list']
for li in lis:
    tb_url = 'https://detail.tmall.com/item.htm?id={}'.format(li['id'])
    dit = {
        'title': li['title'],
        'brand': li['brand'],
        'store name': li['shop'],
        'category': li['nextCatName'],
        'list price': li['oriPrice'],
        'average transaction price': li['price'],
        'sales volume': li['offer30'],
        'sales amount': li['price30'],
        'Taobao address': tb_url,
    }
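As a purely optional hardening step (my own addition, not from the original code), dict.get() with default values avoids a KeyError if a record in the JSON happens to be missing a field:

# Defensive variant of the extraction loop above; field names are the same as in the original
for li in html_data.get('data', {}).get('list', []):
    tb_url = 'https://detail.tmall.com/item.htm?id={}'.format(li.get('id', ''))
    dit = {
        'title': li.get('title', ''),
        'brand': li.get('brand', ''),
        'store name': li.get('shop', ''),
        'category': li.get('nextCatName', ''),
        'list price': li.get('oriPrice', ''),
        'average transaction price': li.get('price', ''),
        'sales volume': li.get('offer30', ''),
        'sales amount': li.get('price30', ''),
        'Taobao address': tb_url,
    }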
Save data
f = open('Amoy data.csv', mode='a', encoding='utf-8-sig', newline='')
csv_writer = csv.DictWriter(f, fieldnames=['title', 'brand', 'store name', 'category', 'list price',
                                           'average transaction price', 'sales volume', 'sales amount', 'Taobao address'])
csv_writer.writeheader()
csv_writer.writerow(dit)
print(dit)
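If you prefer the file to be closed automatically, the save step can also be sketched with a context manager. This is only a stylistic alternative of mine; the original code keeps the file handle open for the whole crawl, which is what the complete code below does as well.

# A minimal sketch using a context manager; dit comes from the extraction loop above
fieldnames = ['title', 'brand', 'store name', 'category', 'list price',
              'average transaction price', 'sales volume', 'sales amount', 'Taobao address']
with open('Amoy data.csv', mode='a', encoding='utf-8-sig', newline='') as f:
    csv_writer = csv.DictWriter(f, fieldnames=fieldnames)
    csv_writer.writeheader()
    csv_writer.writerow(dit)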
Complete code
import requests
import csv

f = open('Amoy data.csv', mode='a', encoding='utf-8-sig', newline='')
csv_writer = csv.DictWriter(f, fieldnames=['title', 'brand', 'store name', 'category', 'list price',
                                           'average transaction price', 'sales volume', 'sales amount', 'Taobao address'])
csv_writer.writeheader()
for page in range(1, 51):
    url = 'https://www.taosj.com/data/industry/hotitems/list?cid=50023282&brand=&type=ALL&date=1596211200000&pageNo={}&pageSize=10&orderType=desc&orderField=amount&searchKey='.format(page)
    # Copy the parameters from the request headers in the developer tools and add the cookie
    headers = {
        'Host': 'www.taosj.com',
        'Referer': 'https://www.taosj.com/industry/index.html',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36',
    }
    response = requests.get(url=url, headers=headers)
    html_data = response.json()
    lis = html_data['data']['list']
    for li in lis:
        tb_url = 'https://detail.tmall.com/item.htm?id={}'.format(li['id'])
        dit = {
            'title': li['title'],
            'brand': li['brand'],
            'store name': li['shop'],
            'category': li['nextCatName'],
            'list price': li['oriPrice'],
            'average transaction price': li['price'],
            'sales volume': li['offer30'],
            'sales amount': li['price30'],
            'Taobao address': tb_url,
        }
        csv_writer.writerow(dit)
        print(dit)
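One optional addition beyond the original article: the comment in the code above mentions adding a cookie copied from the developer tools, and since the loop fires 50 requests in a row it may also help to pause between pages. The sketch below shows both; the cookie value is a placeholder, not a real credential.

import time

headers = {
    'Host': 'www.taosj.com',
    'Referer': 'https://www.taosj.com/industry/index.html',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36',
    'Cookie': 'paste-your-own-cookie-string-here',  # placeholder, copy the real value from the developer tools
}

for page in range(1, 51):
    # ... same request, parsing and CSV writing as in the complete code above ...
    time.sleep(1)  # wait one second between pages to avoid hammering the server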
That is how Python can be used to crawl commodity data from the Amoy data platform. I hope the content above is of some help to you; if you think the article is good, feel free to share it so that more people can see it.