Suppose I am scraping a URL:

http://www.engineering.careers360.com/colleges/list-of-engineering-colleges-in-India?sort_filter=alpha

A single page does not contain all of the data I want to scrape, so how can I scrape the data from all of the following pages as well? I am using Python 3.5.1 and Beautiful Soup. Note: I cannot use Scrapy or lxml, because they give me installation errors.
Determine the last page by extracting the page parameter from the "Go to last page" element. Then loop over every page, maintaining a web-scraping session with requests.Session():
import re
import requests
from bs4 import BeautifulSoup

with requests.Session() as session:
    # extract the number of the last page from the "Go to last page" pager link
    response = session.get("http://www.engineering.careers360.com/colleges/list-of-engineering-colleges-in-India?sort_filter=alpha")
    soup = BeautifulSoup(response.content, "html.parser")
    last_page = int(re.search(r"page=(\d+)", soup.select_one("li.pager-last").a["href"]).group(1))

    # loop over every page (the site's "page" parameter is zero-based,
    # so include last_page itself)
    for page in range(last_page + 1):
        response = session.get("http://www.engineering.careers360.com/colleges/list-of-engineering-colleges-in-India?sort_filter=alpha&page=%d" % page)
        soup = BeautifulSoup(response.content, "html.parser")

        # print the title of every search result on this page
        for result in soup.select("li.search-result"):
            title = result.find("div", class_="title").get_text(strip=True)
            print(title)
Prints:
A C S College of Engineering, Bangalore
A1 Global Institute of Engineering and Technology, Prakasam
AAA College of Engineering and Technology, Thiruthangal
...
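For what it's worth, the key step is pulling the page number out of the pager link's href with a regex. That step can be exercised on its own; the href below is a made-up example of what the "Go to last page" link might contain (the real value on the site may differ):

```python
import re

# Hypothetical href, invented for illustration only.
href = "/colleges/list-of-engineering-colleges-in-India?sort_filter=alpha&page=37"

# Same extraction as in the answer: grab the numeric "page" query value.
last_page = int(re.search(r"page=(\d+)", href).group(1))
print(last_page)  # → 37
```

Using a raw string (r"page=(\d+)") avoids a DeprecationWarning about the \d escape on newer Python versions.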