Suppose I have a DataFrame like this:
df = spark.createDataFrame(
    [
        ('Test1 This is a test Test2', 'This is a test'),
        ('That is', 'That')
    ],
    ['text', 'name'])
+--------------------------+--------------+
|text                      |name          |
+--------------------------+--------------+
|Test1 This is a test Test2|This is a test|
|That is                   |That          |
+--------------------------+--------------+
If I apply df.withColumn("new", F.expr("regexp_replace(text, name, '')")).show(truncate=False) (with from pyspark.sql import functions as F), it works fine and the result is:
+--------------------------+--------------+------------+
|text                      |name          |new         |
+--------------------------+--------------+------------+
|Test1 This is a test Test2|This is a test|Test1  Test2|
|That is                   |That          | is         |
+--------------------------+--------------+------------+
Now suppose I have the following DataFrame:
+-----------------------------+-----------------+
|text                         |name             |
+-----------------------------+-----------------+
|Test1 This is a test(+1 Test2|This is a test(+1|
|That is                      |That             |
+-----------------------------+-----------------+
If I apply the command from above, the ( and + in name are interpreted as regex metacharacters and I get the following error message:
java.util.regex.PatternSyntaxException: Dangling meta character '+'
What is the most "pyspark" way to avoid this exception while keeping the values in text unchanged?
Thanks
Best Answer
Use Spark's replace function instead of regexp_replace. Unlike regexp_replace, replace treats its search argument as a literal string, so regex metacharacters in name cause no trouble.
replace(str, search[, replace]) - Replaces all occurrences of search with replace.
Example:
df.show(10, False)
#+-----------------------------+-----------------+
#|text                         |name             |
#+-----------------------------+-----------------+
#|Test1 This is a test(+1 Test2|This is a test(+1|
#|That is                      |That             |
#+-----------------------------+-----------------+
df.withColumn("new",expr("replace(text,name,'')")).show(10,False)
#+-----------------------------+-----------------+------------+
#|text |name |new |
#+-----------------------------+-----------------+------------+
#|Test1 This is a test(+1 Test2|This is a test(+1|Test1 Test2|
#|That is |That | is |
#+-----------------------------+-----------------+------------+
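If you would rather keep regexp_replace, a possible workaround (a sketch of my own, not part of the original answer) is to quote the pattern so the regex engine treats it literally. The snippet below wraps name in the Java-regex literal-quoting markers \Q...\E; it assumes default Spark SQL string escaping and that name never contains the sequence \E, which would close the quoted region early.

from pyspark.sql import functions as F

# \Q ... \E tells the Java regex engine to treat everything in between
# as a literal string, so '(' and '+' in `name` lose their special meaning.
df.withColumn(
    "new",
    F.expr(r"regexp_replace(text, concat('\\Q', name, '\\E'), '')")
).show(10, False)

As a side note, newer Spark versions (3.5+) also expose this literal replacement directly in the DataFrame API as pyspark.sql.functions.replace, so there you would not need to go through expr at all.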
On the topic of pyspark: removing a substring that is another column's value and contains regex characters, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/65022464/