Scrapy spider keeps returning no data, how can I fix it? - 灵析社区

抠香糖

How frustrating!

![image.png](https://wmlx-new-image.oss-cn-shanghai.aliyuncs.com/images/20241027/4f69fc2743b2f192428c2cb53d91531b.png)

Your code has a bug: execution falls into the exception branch, and since that branch contains no `yield item`, the spider just stops producing results there. You don't actually need that check at all. After removing it, the spider works:

![image.png](https://wmlx-new-image.oss-cn-shanghai.aliyuncs.com/images/20241027/81e0af8d4d949e0ae738a95096ab5b46.png)

* * *

Running scrapy for this is a bit of a hassle; the snippet below does the same job with `requests` plus `lxml`:

```python
import requests as r
from lxml.etree import HTML

def main():
    resp = r.get('https://tianjin.cncn.com/jingdian/')
    # The page is GB2312-encoded, so decode the raw bytes explicitly.
    content = resp.content.decode('gb2312')
    html = HTML(content)
    # Each scenic spot sits in an <li> under the city_spots_list div.
    nodes = html.xpath('//div[@class="city_spots_list"]/ul/li')
    for n in nodes:
        title = n.xpath('./a/div[@class="title"]//b//text()')
        print(title)

if __name__ == '__main__':
    main()
```

* * *

Output:

```
['天津之眼摩天轮']
['五大道']
['天津古文化街']
['海河意式风情区']
['瓷房子']
['天津欢乐谷']
['动物园']
['天津自然博物馆']
['西开教堂']
['海昌极地海洋世界']
['霍元甲故居']
['天津航母主题公园']
['大沽口炮台']
['静园']
['世纪钟']
['塘沽滨海世纪广场']
['南开大学']
['水上公园']
['天津大学']
```
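The screenshots above are not legible in text form, but the failure mode they describe is easy to reproduce without scrapy: a `parse` callback is a generator, and any code path that skips `yield` silently drops items. A minimal sketch of the before/after behavior (the function names, the needless check, and the sample rows are made up for illustration):

```python
def parse_guarded(rows):
    # Mimics the broken version: an unnecessary check routes some rows
    # into an exception branch that has no yield, so those items vanish.
    for row in rows:
        try:
            if not row:          # the check the answer says to remove
                raise ValueError
            yield {'title': row}
        except ValueError:
            pass                 # no yield here, so the item is dropped

def parse_fixed(rows):
    # The fix: yield unconditionally and let downstream code handle items.
    for row in rows:
        yield {'title': row}

rows = ['天津之眼摩天轮', '', '五大道']
print(len(list(parse_guarded(rows))))  # 2: one item silently lost
print(len(list(parse_fixed(rows))))    # 3: every row becomes an item
```

The same principle applies inside a real scrapy spider: keep the `yield item` on the main path rather than inside a conditional or `try`/`except` that can swallow it.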
