I am new to pyspark. I want to do some machine learning on a text file.
from pyspark import Row
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
from pyspark import SparkConf
sc = SparkContext
spark = SparkSession.builder.appName("ML").getOrCreate()
train_data = spark.read.text("20ng-train-all-terms.txt")
td = train_data.rdd  # transform df to rdd
tr_data= td.map(lambda line: line.split()).map(lambda words: Row(label=words[0],words=words[1:]))
from pyspark.ml.feature import CountVectorizer
vectorizer = CountVectorizer(inputCol ="words", outputCol="bag_of_words")
vectorizer_transformer = vectorizer.fit(td)
For my last command, I get the error "AttributeError: 'RDD' object has no attribute '_jdf'".
Can anyone help me? Thanks.
Best Answer
You should not use an rdd with CountVectorizer. Instead, you should build the array of words in the dataframe itself, as in
train_data = spark.read.text("20ng-train-all-terms.txt")
from pyspark.sql import functions as F
td= train_data.select(F.split("value", " ").alias("words")).select(F.col("words")[0].alias("label"), F.col("words"))
from pyspark.ml.feature import CountVectorizer
vectorizer = CountVectorizer(inputCol="words", outputCol="bag_of_words")
vectorizer_transformer = vectorizer.fit(td)
Then it should work, so that you can call the transform function as
vectorizer_transformer.transform(td).show(truncate=False)
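Under the hood, CountVectorizer learns a vocabulary over all of the word arrays and maps each one to a vector of term counts. A minimal pure-Python sketch of that idea (not the actual Spark implementation, just an illustration of what fit and transform compute):

```python
from collections import Counter

def fit_vocabulary(docs):
    """Collect every distinct token across the documents,
    ordered by total frequency, most frequent first."""
    totals = Counter(tok for doc in docs for tok in doc)
    return [tok for tok, _ in totals.most_common()]

def to_counts(doc, vocabulary):
    """Map one word array to its bag-of-words count vector."""
    counts = Counter(doc)
    return [counts[tok] for tok in vocabulary]

docs = [["spark", "is", "fast"], ["spark", "scales"]]
vocab = fit_vocabulary(docs)
print(vocab)                    # "spark" comes first: it appears twice
print(to_counts(docs[0], vocab))
```

Spark's CountVectorizer produces sparse vectors rather than dense lists, but the fitted vocabulary and the per-row counts follow the same logic.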
Now, if you want to stick with the old style of converting to an rdd, then you have to modify certain lines of code. The following is your complete modified (working) code:
from pyspark import Row
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
from pyspark import SparkConf
sc = SparkContext.getOrCreate()  # note: SparkContext alone (no call) only assigns the class
spark = SparkSession.builder.appName("ML").getOrCreate()
train_data = spark.read.text("20ng-train-all-terms.txt")
td = train_data.rdd  # transform df to rdd
tr_data= td.map(lambda line: line[0].split(" ")).map(lambda words: Row(label=words[0], words=words[1:])).toDF()
from pyspark.ml.feature import CountVectorizer
vectorizer = CountVectorizer(inputCol="words", outputCol="bag_of_words")
vectorizer_transformer = vectorizer.fit(tr_data)
But I would suggest you stick with the dataframe approach.
Regarding "python-3.x - 'RDD' object has no attribute '_jdf' pyspark RDD", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/48990291/