python - How to print the decision path of a random forest with feature names in pyspark?

Tags: python apache-spark pyspark

How can I modify the code below so that the decision path is printed with feature names instead of just feature numbers?

import pandas as pd
import pyspark.sql.functions as F
from pyspark.ml import Pipeline, Transformer
from pyspark.sql import DataFrame, SparkSession
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.feature import VectorAssembler

# Create (or reuse) the SparkSession referenced as `spark` below
spark = SparkSession.builder.getOrCreate()

data = pd.DataFrame({
    'ball': [0, 1, 2, 3],
    'keep': [4, 5, 6, 7],
    'hall': [8, 9, 10, 11],
    'fall': [12, 13, 14, 15],
    'mall': [16, 17, 18, 10],
    'label': [21, 31, 41, 51]
})

df = spark.createDataFrame(data)

assembler = VectorAssembler(
    inputCols=['ball', 'keep', 'hall', 'fall'], outputCol='features')
dtc = DecisionTreeClassifier(featuresCol='features', labelCol='label')

pipeline = Pipeline(stages=[assembler, dtc]).fit(df)
transformed_pipeline = pipeline.transform(df)

# The fitted DecisionTreeClassificationModel is the second pipeline stage
ml_pipeline = pipeline.stages[1]
print(ml_pipeline.toDebugString)

Output:

DecisionTreeClassificationModel (uid=DecisionTreeClassifier_48b3a34f6fb1f1338624) of depth 3 with 7 nodes
  If (feature 0 <= 0.5)
   Predict: 21.0
  Else (feature 0 > 0.5)
   If (feature 0 <= 1.5)
    Predict: 31.0
   Else (feature 0 > 1.5)
    If (feature 0 <= 2.5)
     Predict: 41.0
    Else (feature 0 > 2.5)
     Predict: 51.0

Best answer

One option is to replace the text in the debug string manually: store the values passed as inputCols in a list input_cols, then replace every occurrence of the pattern feature i with the i-th element of input_cols.

import pyspark.sql.functions as F
from pyspark.ml import Pipeline, Transformer
from pyspark.sql import DataFrame, SparkSession
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.feature import VectorAssembler
import pandas as pd

# Create (or reuse) the SparkSession referenced as `spark` below
spark = SparkSession.builder.getOrCreate()

data = pd.DataFrame({
    'ball': [0, 1, 2, 3],
    'keep': [4, 5, 6, 7],
    'hall': [8, 9, 10, 11],
    'fall': [12, 13, 14, 15],
    'mall': [16, 17, 18, 10],
    'label': [21, 31, 41, 51]
})

df = spark.createDataFrame(data)

input_cols = ['ball', 'keep', 'hall', 'fall']
assembler = VectorAssembler(
    inputCols=input_cols, outputCol='features')
dtc = DecisionTreeClassifier(featuresCol='features', labelCol='label')

pipeline = Pipeline(stages=[assembler, dtc]).fit(df)
transformed_pipeline = pipeline.transform(df)

ml_pipeline = pipeline.stages[1]

# Replace each "feature <i>" placeholder in the debug string with the
# corresponding input column name
string = ml_pipeline.toDebugString
for i, feat in enumerate(input_cols):
    string = string.replace('feature ' + str(i), feat)
print(string)

Output:

DecisionTreeClassificationModel (uid=DecisionTreeClassifier_4eb084167f2ed4b671e8) of depth 3 with 7 nodes
  If (ball <= 0.0)
   Predict: 21.0
  Else (ball > 0.0)
   If (ball <= 1.0)
    Predict: 31.0
   Else (ball > 1.0)
    If (ball <= 2.0)
     Predict: 41.0
    Else (ball > 2.0)
     Predict: 51.0
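
Note that plain str.replace can misfire once there are ten or more features, since the substring feature 1 is also a prefix of feature 10. A more defensive sketch (assuming the same input_cols list and fitted ml_pipeline as above) uses a regular expression with a lookahead so only exact feature indices are rewritten; the same substitution applies to the toDebugString of a RandomForestClassificationModel, which is what the question title asks about.

import re

string = ml_pipeline.toDebugString
for i, feat in enumerate(input_cols):
    # Match 'feature <i>' only when <i> is not followed by another digit,
    # so 'feature 1' does not also rewrite the prefix of 'feature 10'
    string = re.sub(r'feature {}(?!\d)'.format(i), feat, string)
print(string)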

Hope this helps!

Regarding "python - How to print the decision path of a random forest with feature names in pyspark?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/51634947/
