Shulou(Shulou.com)06/02 Report--
In this issue, the editor will show you how to crawl free design materials with Python. The article is rich in content and analyzes the topic from a professional point of view. I hope you can get something out of it after reading.
As we all know, a large collection of design materials is very handy to have, but buying them is too expensive. Today, we will use a Python crawler to fetch these materials and save them locally!
To crawl the content of a website, we need to start from the following aspects (a small probe sketch follows the list):
1. How do we grab the link to the next page of the site?
2. Is the target resource static or dynamic (video, picture, etc.)?
3. What are the data structures and formats used by the website?
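As a quick illustration of the first point, here is a minimal probe sketch. It assumes the same page structure the full script below targets: listing pages under ibaotu.com/shipin/ whose "next page" control is an a element with class "next" and a protocol-relative href.

import requests
from lxml import etree

# Probe one listing page and look for its "next page" link.
headers = {"User-Agent": "Mozilla/5.0"}
url = "https://ibaotu.com/shipin/7-0-0-0-0-1.html"  # first listing page
response = requests.get(url, headers=headers)
html = etree.HTML(response.content.decode())
next_hrefs = html.xpath('//a[@class="next"]/@href')
if next_hrefs and next_hrefs[0]:
    # hrefs on the site are protocol-relative ("//..."), so prepend a scheme
    print("next page:", "http:" + next_hrefs[0])
else:
    print("no next-page link found (last page, or pagination is rendered dynamically)")

If the second branch fires even though the browser shows a "next" button, the pagination is probably rendered by JavaScript, which is exactly the static-versus-dynamic question from point 2: a plain requests fetch only sees the initial HTML.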
The source code is as follows:

import requests
from lxml import etree
import threading


class Spider(object):
    def __init__(self):
        self.headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 "
                          "(KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36"
        }
        self.offset = 1  # page counter, used only for progress output

    def start_work(self, url):
        print("crawling page %d..." % self.offset)
        self.offset += 1
        response = requests.get(url=url, headers=self.headers)
        html = etree.HTML(response.content.decode())
        # video sources and titles on the current listing page
        video_src = html.xpath('//div[@class="video-play"]/video/@src')
        video_title = html.xpath('//span[@class="video-title"]/text()')
        # hrefs are protocol-relative ("//..."), so prepend the scheme
        next_page = "http:" + html.xpath('//a[@class="next"]/@href')[0]
        # crawling completed: on the last page the "next" href is empty
        if next_page == "http:":
            return
        self.write_file(video_src, video_title)
        self.start_work(next_page)

    def write_file(self, video_src, video_title):
        for src, title in zip(video_src, video_title):
            response = requests.get("http:" + src, headers=self.headers)
            file_name = title + ".mp4"
            # remove "/" so the title is a valid file name
            file_name = "".join(file_name.split("/"))
            print("crawling %s" % file_name)
            with open("E://python//demo//mp4//" + file_name, "wb") as f:
                f.write(response.content)


if __name__ == "__main__":
    spider = Spider()
    for i in range(0, 3):
        # spider.start_work(url="https://ibaotu.com/shipin/7-0-0-0-" + str(i) + "-1.html")
        t = threading.Thread(target=spider.start_work,
                             args=("https://ibaotu.com/shipin/7-0-0-0-" + str(i) + "-1.html",))
        t.start()
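One caveat about the script: each video is fetched with a single requests.get and written in one piece, so every thread holds an entire .mp4 in memory. A possible refinement (my own suggestion, not part of the original script) is to stream the download in chunks:

import requests

def download_video(src, file_name, headers):
    # stream=True fetches the body lazily; iter_content then writes it
    # to disk in 1 MB chunks instead of buffering the whole file.
    with requests.get("http:" + src, headers=headers, stream=True) as response:
        response.raise_for_status()
        with open(file_name, "wb") as f:
            for chunk in response.iter_content(chunk_size=1024 * 1024):
                f.write(chunk)

Note also that the three threads share one Spider instance, so self.offset is incremented concurrently and the "crawling page %d" counter is only approximate; each thread otherwise follows its own chain of pages, so the downloads themselves do not interfere.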
Effect display: (screenshot of the downloaded videos omitted)
The above is what the editor has shared about crawling free materials from Baotu (ibaotu.com) with Python. If you happen to have similar doubts, you may refer to the above analysis. If you want to learn more, you are welcome to follow the industry information channel.