I am trying to build my own logistic regression function in R using stochastic gradient descent, but what I have right now makes the weights grow without bound, so it never stops:
# Logistic regression
# Takes training example vector, output vector, learn rate scalar, and convergence delta limit scalar
my_logr <- function(training_examples, training_outputs, learn_rate, conv_lim) {
  # Initialize gradient vector
  gradient <- as.vector(rep(0, NCOL(training_examples)))
  # Difference between weights
  del_weights <- as.matrix(1)
  # Weights
  weights <- as.matrix(runif(NCOL(training_examples)))
  weights_old <- as.matrix(rep(0, NCOL(training_examples)))
  # Compute gradient
  while (norm(del_weights) > conv_lim) {
    for (k in 1:NROW(training_examples)) {
      gradient <- gradient + 1/NROW(training_examples) *
        ((t(training_outputs[k] * training_examples[k,]
            / (1 + exp(training_outputs[k] * t(weights) %*% as.numeric(training_examples[k,]))))))
    }
    # Update weights
    weights <- weights_old - learn_rate * gradient
    del_weights <- as.matrix(weights_old - weights)
    weights_old <- weights
    print(weights)
  }
  return(weights)
}
The function can be tested with the following code:
data(iris)  # Iris data already present in R
# Dataset for part a (first 50 vs. last 100)
iris_a <- iris
iris_a$Species <- as.integer(iris_a$Species)
# Convert list to binary class
for (i in 1:NROW(iris_a$Species)) {
  if (iris_a$Species[i] != "1") {iris_a$Species[i] <- -1}
}
random_sample <- sample(1:NROW(iris), 50)
weights_a <- my_logr(iris_a[random_sample, 1:4], iris_a$Species[random_sample], 1, .1)
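As an aside, the label-conversion loop above can be written without an explicit loop. A minimal sketch using `ifelse()` (the variable names mirror the test code above; mapping the first species to +1 and the rest to -1 is the same convention):

```r
data(iris)
iris_a <- iris
# setosa is the first factor level, i.e. the class coded as 1 above
iris_a$Species <- ifelse(as.integer(iris$Species) == 1L, 1L, -1L)
table(iris_a$Species)  # 100 labels of -1, 50 labels of +1
```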
I double-checked my algorithm against Abu-Mostafa's, which is as follows:
gradient <- -1/N * sum_{1 to N} (training_answer_n * training_Vector_n / (1 + exp(training_answer_n * dot(weight,training_vector_n))))
weight_new <- weight - learn_rate*gradient
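For reference, that gradient formula can be written in vectorized R. This is a sketch under my own assumptions: `X` is an N-by-d numeric matrix of training vectors, `y` a vector of labels in {-1, +1}, and `w` the current weight vector (the function name is mine, not from the question):

```r
# Gradient of the logistic loss, -1/N * sum_n y_n * x_n / (1 + exp(y_n * w.x_n))
logistic_gradient <- function(X, y, w) {
  margins <- y * as.vector(X %*% w)            # y_n * dot(w, x_n) for each n
  -colSums(X * (y / (1 + exp(margins)))) / nrow(X)  # row-wise scaling, then sum
}
```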
Am I missing something here?
Best Answer
Mathematically, an unconstrained magnitude on the weight vector does not yield a unique solution. When I added these two lines to the classifier function, it converged in two steps:
# Normalize
weights <- weights/norm(weights)
...
# Update weights
weights <- weights_old - learn_rate*gradient
weights <- weights / norm(weights)
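One caveat on the normalization lines above, which I should flag as my own observation rather than part of the original answer: R's `norm()` defaults to `type = "O"` (the one norm), which for a column vector is the sum of absolute values, not the Euclidean length. If unit Euclidean length is intended, `type = "2"` makes that explicit:

```r
w <- matrix(c(3, 4))
w / norm(w, type = "2")  # divides by sqrt(3^2 + 4^2) = 5, giving (0.6, 0.8)
w / norm(w)              # default "O" divides by |3| + |4| = 7 instead
```

Either choice constrains the magnitude, so the convergence argument in the answer holds regardless; the difference only matters if the weights are interpreted as a unit vector.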
I could not get @SimonO101's suggestion to work, and I am not using this code for real work (there are built-ins like glm for that), so loops I understand are good enough. The whole function:
# Logistic regression
# Takes training example vector, output vector, learn rate scalar, and convergence delta limit scalar
my_logr <- function(training_examples, training_outputs, learn_rate, conv_lim) {
  # Initialize gradient vector
  gradient <- as.vector(rep(0, NCOL(training_examples)))
  # Difference between weights
  del_weights <- as.matrix(1)
  # Weights
  weights <- as.matrix(runif(NCOL(training_examples)))
  weights_old <- as.matrix(rep(0, NCOL(training_examples)))
  # Normalize
  weights <- weights/norm(weights)
  # Compute gradient
  while (norm(del_weights) > conv_lim) {
    for (k in 1:NCOL(training_examples)) {
      gradient <- gradient - 1/NROW(training_examples) *
        ((t(training_outputs[k] * training_examples[k,]
            / (1 + exp(training_outputs[k] * t(weights) %*% as.numeric(training_examples[k,]))))))
    }
    # gradient <- -1/NROW(training_examples) * sum(training_outputs * training_examples / (1 + exp(training_outputs * weights%*%training_outputs) ) )
    # Update weights
    weights <- weights_old - learn_rate * gradient
    weights <- weights / norm(weights)
    del_weights <- as.matrix(weights_old - weights)
    weights_old <- weights
    print(weights)
  }
  return(weights)
}
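Since the answer mentions glm, a hedged sanity check against it is possible. Note the caveats: glm() expects labels in {0, 1} rather than {-1, +1} and fits an intercept by default, so its coefficients are comparable to the unit-normalized weights above in direction only, not in scale. Also, setosa is linearly separable in the iris data, so glm() will warn about fitted probabilities of 0 or 1:

```r
data(iris)
y01 <- as.integer(iris$Species == "setosa")  # 1 for the first class, 0 otherwise
fit <- glm(y01 ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width,
           data = iris, family = binomial)
coef(fit)  # reference coefficients (intercept plus four slopes)
```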
On r - implementation of the logistic regression formula in R, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/15478327/