python - Reading scraped table data vertically instead of horizontally - Python

Tags: python web-scraping beautifulsoup

I am writing a web scraper with Python and BeautifulSoup to pull data from a table on a web page. The link to the table is in the code (url01).

I would like to know whether it is possible to read the data from the table vertically (column by column) instead of horizontally (row by row).

Here is my code:

import requests
import json
from bs4 import BeautifulSoup
from itertools import islice

#URL declaration
url01 = 'https://www.statistik.at/web_de/statistiken/wirtschaft/preise/baukostenindex/030979.html'

#BeautifulSoup4
response = requests.get(url01, timeout=5)
content = BeautifulSoup(response.content, 'html.parser')

#deletes all the empty tags
empty_tags = content.find_all(lambda tag: not tag.contents)
[empty_tag.extract() for empty_tag in empty_tags]

#Find all td in class body in div table table-hover
data = content.find_all('td')
#print (data)

numbers = [d.text.encode('utf-8') for d in data]
#print (numbers)

#create string
str1 = ''.join(str(e) for e in numbers)
#print (str1)

str_splt = str1.split('b')
#print (str_splt)

#Split list into several sublists
length_to_split = [45, 45, 45, 110, 110, 110, 188, 188, 188, 253, 253, 253, 383, 383, 383]
Input = iter(str_splt)
Output = [list(islice(Input, elem))
          for elem in length_to_split]
print (Output[3])


#Python dictionary
dataDict = {
    '2015 Lohn': None,
    '2015 Sonstiges': None,
    '2015 Insgesamt': None,
    'Insgesamt': None
    }

dataDict['Insgesamt'] = str_splt
#print (dataDict)

#save dictionary in json file
with open('indexData.json', 'w') as f:
    json.dump(dataDict, f)

When I run the program and print out my first sublist, this is the result. It has the desired length (45), but it was read horizontally from the table, which makes it useless:

['', "'108,6'", "'110,8'", "'109,8'", "'122,1'", "'114,3'", "'118,0'", "'140,6'", "'131,9'", "'136,0'", "'162,0'", "'166,3'", "'165,2'", "'261,9'", "'189,8'", "'222,5'", "'108,6'", "'111,4'", "'110,1'", "'122,1'", "'115,0'", "'118,4'", "'140,6'", "'132,6'", "'136,4'", "'162,0'", "'167,2'", "'165,7'", "'261,9'", "'190,8'", "'223,1'", "'105,2'", "'111,9'", "'108,9'", "'118,2'", "'115,5'", "'117,1'", "'136,2'", "'133,2'", "'134,9'", "'157,0'", "'168,0'", "'163,9'", "'253,7'", "'191,7'"]

Best answer

One possible solution without pandas. The function get_column() returns a column as a tuple; column indexing starts at 0:

import requests
import json
from bs4 import BeautifulSoup
from itertools import islice

#URL declaration
url01 = 'https://www.statistik.at/web_de/statistiken/wirtschaft/preise/baukostenindex/030979.html'

#BeautifulSoup4
response = requests.get(url01, timeout=5)
content = BeautifulSoup(response.content, 'html.parser')

rows = []
for tr in content.select('tr')[:-1]: # [:-1] because we don't want the last info row
    data = [td.get_text(strip=True) for td in tr.select('td')]
    if data:
        rows.append(data)

def get_column(rows, col_num):
    # transpose the rows with zip(*rows), then pick one column (0-based)
    return [*zip(*rows)][col_num]

print('2015 Lohn:')
print(get_column(rows, 0))

print('2015 Sonstiges:')
print(get_column(rows, 1))

print('2015 Insgesamt:')
print(get_column(rows, 2))

Prints:

2015 Lohn:
('108,6', '108,6', '105,2', '105,2', '105,2', '105,2', '104,4', '105,2', '105,2', '105,2', '105,2', '105,2', '105,2', '105,2', '105,2', '102,9', '102,9', '102,9', '102,9', '102,6', '102,9', '102,9', '102,9', '102,9', '102,9', '102,9', '102,9', '102,9', '101,9', '101,9', '101,9', '101,9', '101,5', '101,9', '101,9', '101,9', '101,9', '101,9', '101,9', '101,9', '101,9', '100,8', '100,8', '100,8', '100,8', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '')
2015 Sonstiges:
('110,8', '111,4', '111,9', '111,0', '111,6', '112,4', '112,6', '113,1', '114,6', '114,8', '114,3', '113,8', '113,0', '113,3', '112,7', '111,4', '110,5', '109,9', '110,0', '106,3', '108,9', '108,9', '108,3', '107,3', '105,7', '105,0', '105,2', '106,1', '106,5', '105,1', '104,3', '104,1', '97,7', '101,6', '99,6', '99,1', '98,5', '98,5', '98,3', '98,9', '98,5', '96,2', '94,1', '93,9', '94,9', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '')
2015 Insgesamt:
('109,8', '110,1', '108,9', '108,4', '108,7', '109,1', '108,9', '109,5', '110,4', '110,4', '110,2', '109,9', '109,5', '109,6', '109,3', '107,6', '107,1', '106,8', '106,8', '104,6', '106,2', '106,2', '105,9', '105,4', '104,5', '104,1', '104,2', '104,7', '104,4', '103,6', '103,2', '103,1', '99,4', '101,7', '100,6', '100,4', '100,0', '100,0', '99,9', '100,2', '100,0', '98,2', '97,1', '97,0', '97,6', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '')
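To tie this back to the dictionary/JSON output attempted in the question, here is a minimal sketch of filling dataDict column by column and dumping it to indexData.json. The hard-coded rows list is sample data standing in for the scraped table, not the real scraper output:

```python
import json

# sample rows standing in for the scraped table (each inner list is one <tr>)
rows = [
    ['108,6', '110,8', '109,8'],
    ['108,6', '111,4', '110,1'],
    ['105,2', '111,9', '108,9'],
]

def get_column(rows, col_num):
    # transpose the rows with zip(*rows), then pick one column (0-based)
    return [*zip(*rows)][col_num]

# fill the dictionary from the question column by column
dataDict = {
    '2015 Lohn': list(get_column(rows, 0)),
    '2015 Sonstiges': list(get_column(rows, 1)),
    '2015 Insgesamt': list(get_column(rows, 2)),
}

# save the dictionary as JSON (lists serialize directly; tuples would too)
with open('indexData.json', 'w') as f:
    json.dump(dataDict, f, ensure_ascii=False)
```

Note that zip(*rows) truncates to the shortest row, so rows with missing cells silently shorten every column; pad the rows first if the table is ragged.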

On python - Reading scraped table data vertically instead of horizontally, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/57143884/
