
Linear Regression & Logistic Regression

Keywords List

Keywords you may encounter when exploring LR (logistic regression), or the bigger concept, ML (machine learning):

supervised learning (监督学习), loss function (损失函数), gradient descent (梯度下降), learning rate (学习率), activation function (激活函数), neural network (神经网络)

Let’s kick off with Linear Regression

Let's dive into today's topic from a data-oriented perspective.

So, we have massive amounts of data accumulated over the last decades, and by itself it doesn't help you with anything; it doesn't mean anything at all. Here comes the question: how do we make use of it?

Let's start with a specific dataset: Boston Housing. I believe we have all had the experience of finding ourselves a rented room in Beijing, so it probably makes sense to you that if a place charges more, there has to be something good about it. What factors make people pay more? The number-one factor that comes to mind might be house area, but it's clearly not rational for the housing price to depend on one single factor. In the Boston Housing dataset, 13 factors are offered; let's take a look at them:

EDA (Exploratory Data Analysis)
  • CRIM: per capita crime rate by town
  • ZN: proportion of residential land zoned for lots over 25,000 sq. ft.
  • INDUS: proportion of non-retail business land per town
  • CHAS: Charles River dummy variable (1 if the home is by the river; 0 otherwise)
  • NOX: nitric oxide concentration
  • RM: number of rooms per dwelling
  • AGE: proportion of owner-occupied units built prior to 1940
  • DIS: weighted distances to five Boston employment centers
  • RAD: index of accessibility to radial highways
  • TAX: property tax rate per $10,000
  • PTRATIO: pupil-teacher ratio by town
  • B: 1000(Bk - 0.63)^2, where Bk is the proportion of Black residents by town
  • LSTAT: proportion of lower-income homeowners in the area
  • MEDV: median value of owner-occupied homes (i.e., the average price)
from sklearn.datasets import load_boston  # note: removed in scikit-learn 1.2; this follows the original post
import pandas as pd

boston = load_boston()
# print(boston.keys())
# # dict_keys(['data', 'target', 'feature_names', 'DESCR', 'filename'])
# Load the data into a DataFrame
boston_df = pd.DataFrame(boston.data)
boston_df.columns = boston.feature_names
boston_df['PRICE'] = boston.target
print(boston_df.shape)  # (506, 14)

14 columns: the first 13 are the independent variables here, and MEDV (stored as PRICE above) is the target we need to predict. The printed shape is (506, 14): 506 samples, 14 columns. Here comes our first concept: supervised learning (监督学习).

Supervised learning (监督学习) means the samples come with targets or labels, while its antonym, unsupervised learning (非监督学习), means all you have is the independent variables.

We're going to check the relationship between the target and each independent variable separately (detailed in Boston_Housing.py). Here are the scatter plots; let's go through them:

(Scatter plots of PRICE/MEDV against each of the 13 features: CRIM, ZN, INDUS, CHAS, NOX, RM, AGE, DIS, RAD, TAX, PTRATIO, B, LSTAT.)
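For reference, a plotting loop along these lines can reproduce the scatter plots (a sketch; the actual Boston_Housing.py may differ in details such as subplot layout):

import matplotlib.pyplot as plt

# Plot PRICE against every feature column (all columns except PRICE itself)
for col in boston_df.columns[:-1]:
    plt.scatter(boston_df[col], boston_df['PRICE'], s=8)
    plt.xlabel(col)
    plt.ylabel('PRICE (MEDV)')
    plt.show()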

As we can see, MEDV varies roughly linearly with 3 factors: RM (rooms per dwelling), LSTAT (proportion of lower-income homeowners), and PTRATIO (pupil-teacher ratio), not with all 13 of them, but it's still worth a try building a linear regression model to fit this dataset.
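Since it's worth a try, here is a minimal fitting sketch (my own, assuming only the dataframe built above; Boston_Housing.py may add scaling and more careful evaluation):

from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Use the three features that look roughly linear in the scatter plots
X = boston_df[['RM', 'LSTAT', 'PTRATIO']]
y = boston_df['PRICE']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print(model.score(X_test, y_test))  # R^2 on held-out data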

13 factors mean 13 independent variables in a function. 13 independent variables are plenty enough to build up a kingdom in hyperspace, but we are all trapped in a 3D world, so that picture is totally beyond our imagination.

To ease you in, I'm going to make another simple demo to illustrate, and in this demo we're going to learn another 3 keywords mentioned above:

loss function (损失函数), gradient descent (梯度下降), learning rate (学习率)

These keywords might be confusing for beginners; I hope you can crack them one by one during this demo. Let's check it out.

Simulation Data

# @Project : LR
# @Author  : nivinanull@163.com
# @File    : Linear_simulation.py
import torch
import matplotlib.pyplot as plt
import sys

# Ensure the parameter initialization is identical on every run
torch.manual_seed(10)

# Create the training data
# rand draws from the uniform distribution over [0, 1)
x = torch.rand(20, 1) * 10  # x data (tensor), shape=(20, 1)
# randn draws from the standard normal distribution (mean 0, variance 1,
# i.e. Gaussian white noise)
# y = 2x + 5, with white noise added
y = 2 * x + (5 + torch.randn(20, 1))  # y data (tensor), shape=(20, 1)

# First draw the scatter plot and pose the question:
# how do we determine w and b in y = wx + b?
# (also used to show the iteration-by-iteration fitting effect)
plt.scatter(x.data.numpy(), y.data.numpy())
plt.show()
# sys.exit()

First, we initialize x, then y, which is designed to vary roughly linearly with x. I hope the graph below helps you catch the idea:

(Scatter plot of the 20 simulated (x, y) points.)

Here is the precondition: we know all the dots' positions in the graph above. Let's list four of them:

(0, 5.5), (0.5, 5.8), (1, 7.72), (1.5, 8.3)

I believe anyone here can feel the linear relationship between x and y, but we are not able to write the equation down directly. Let's turn to the computer for help.

(Run linear_simulation.py)

Please watch how the parameters change: the slope w and the intercept b!

(I don't know how to make a GIF…)

(Five snapshots of the scatter plot with the fitted line at successive iterations, the loss value decreasing from frame to frame.)

What does the computer do to solve the problem? We tried two stopping conditions, loss.data.numpy() < 1 and loss.data.numpy() < 0.5 (only the first version is shown above). What we see is that the line fits the dataset better and better; what the computer knows is only that the loss value is going down.

The loss value, obviously, is the value of the loss function (损失函数), and we use gradient descent (梯度下降) to minimize that loss function; the two work together on this job.

# Now let's pin down the concepts above:
# the loss function sets the target
# gradient descent lets the data point the way
# the learning rate decides whether we walk fast or slow
lr = 0.05  # learning rate

# (w and b are initialized further below, with requires_grad=True)
for iteration in range(1000):
    # Forward propagation
    wx = torch.mul(w, x)
    y_pred = torch.add(wx, b)

    # Compute the MSE loss
    loss = (0.5 * (y - y_pred) ** 2).mean()

    # Backward propagation
    loss.backward()

    # Update the parameters
    b.data.sub_(lr * b.grad)
    w.data.sub_(lr * w.grad)

    # Zero the tensors' gradients
    w.grad.zero_()
    b.grad.zero_()

    # Plot every 20 iterations
    if iteration % 20 == 0:
        plt.scatter(x.data.numpy(), y.data.numpy())
        # color and line width
        plt.plot(x.data.numpy(), y_pred.data.numpy(), 'r-', lw=5)
        plt.text(2, 20, 'Loss=%.4f' % loss.data.numpy(),
                 fontdict={'size': 20, 'color': 'red'})
        plt.xlim(1.5, 10)
        plt.ylim(8, 28)
        plt.title("Iteration: {}\nw: {} b: {}".format(
            iteration, w.data.numpy(), b.data.numpy()))
        plt.pause(3)

    # Run once with loss.data.numpy() < 1, once with < 0.5
    if loss.data.numpy() < 0.5:
        break

ŷ is the notation for the predicted y (the predicted value).

Another question: how do we define the loss function (损失函数)? In this case, we don't know the values of w and b at first. All we know is that for an input x we get a predicted ŷ (ŷ = wx + b) and a real y, the target, and we want ŷ to approach y as closely as possible. How do we measure the difference between ŷ and y?

(Illustration of mean squared error; source below.)

https://peltarion.com/knowledge-center/documentation/modeling-view/build-an-ai-model/loss-functions/mean-squared-error
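For the record, the quantity the figure illustrates is the mean squared error:

$$L(y, \hat{y}) = \frac{1}{N}\sum_{i=0}^{N}(y_i - \hat{y}_i)^2$$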

I hope the picture above rings a bell. Now we're done with the loss function part; let's move on to the next item, gradient descent (梯度下降), a useful math tool for minimizing the loss function.

You may not be familiar with the gradient (梯度). It concerns functions of several variables and their partial derivatives, and the concept of a derivative itself might be a blur too. I recommend making an analogy between the derivative and velocity, the rate of change of position: if a function gives the position of something as a function of time, the first derivative gives its velocity and the second derivative gives its acceleration; velocity is the speed of something in a particular direction. Likewise, the gradient represents the slope of the tangent to the graph of the function. More precisely, the gradient points in the direction of the greatest rate of increase of the function, and its magnitude is the slope of the graph in that direction. So if we want to minimize the loss function, we step in the opposite direction of the gradient.


We got the loss function, MSE, above; replacing ŷ with (wx + b):

$$L(y, \hat{y}) = \frac{1}{N}\sum_{i=0}^{N}\left(y_i - (w x_i + b)\right)^2$$

Then we take the partial derivatives with respect to w and b:

$$-\Delta w = -\frac{1}{N}\sum_{i=0}^{N} 2x_i(\hat{y}_i - y_i)$$

$$-\Delta b = -\frac{1}{N}\sum_{i=0}^{N} 2(\hat{y}_i - y_i)$$

(The equations above are not the key point of our class; they are purely a math thing. If you can't trust them, try to prove them afterwards.)
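Short of a proof, you can at least verify them numerically. Here is a quick standalone sketch (my own, not from the course files) comparing the analytic gradients with finite differences:

import torch

torch.manual_seed(0)
x = torch.rand(20, 1) * 10
y = 2 * x + 5 + torch.randn(20, 1)
w, b = torch.tensor(1.0), torch.tensor(0.0)

def loss(w, b):
    return ((y - (w * x + b)) ** 2).mean()

# Analytic gradients from the formulas above
y_hat = w * x + b
dw = (2 * x * (y_hat - y)).mean()
db = (2 * (y_hat - y)).mean()

# Finite-difference approximations
eps = 1e-4
dw_num = (loss(w + eps, b) - loss(w - eps, b)) / (2 * eps)
db_num = (loss(w, b + eps) - loss(w, b - eps)) / (2 * eps)
print(dw.item(), dw_num.item())  # the two values should match closely
print(db.item(), db_num.item())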

For the forward propagation, we only get a prediction equation for ŷ with parameters w and b, not a real number, and ŷ is supposed to approach y as closely as possible; we then found a way to measure the loss between y and ŷ. For the backward propagation, our target is clear, minimizing the loss function, and we already found a math tool for the job. It seems like we are good to go, but how do we start? We are kind of stuck in a dilemma here: ŷ is an equation with parameters w and b, and w and b are supposed to change in a way that minimizes the loss function; yet, as you can see, ŷ appears in the derivative equations, so the forward propagation and the backward propagation rely on each other like a vicious circle.

What do we do to break the vicious circle? We initialize the parameters randomly.

# Randomly initialize the linear regression parameters w and b
w = torch.randn((1), requires_grad=True)
b = torch.zeros((1), requires_grad=True)

Now we can update w and b with Δw and Δb:

$$w = w + lr \cdot (-\Delta w)$$

$$b = b + lr \cdot (-\Delta b)$$

You might notice an unfamiliar symbol, lr. What does it mean? Well, that's our 4th keyword: the learning rate (学习率). The learning rate means you don't update the parameters one hundred percent at a time from one single batch of data; it determines how large the update, or moving step, is. There will inevitably be misleading signals in any one batch of data, so what the learning rate does is take one small step at a time in the current direction; for the same reason it's usually a small positive float less than 1. The value of the loss function will usually go down over many, many rounds of iteration. It's a very important parameter: if lr is too small, the algorithm converges very slowly; if it's too large, the loss can oscillate or even blow up.

(Illustration: the effect of the learning rate on convergence.)
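If you want to feel this yourself, here is a small experiment (my own sketch, reusing the simulated data): the same update loop run with a tiny lr and with the lr used above. On this data, trying lr = 0.1 should make the loss blow up, so feel free to test that as well.

import torch

torch.manual_seed(10)
x = torch.rand(20, 1) * 10
y = 2 * x + (5 + torch.randn(20, 1))

for lr in (0.001, 0.05):  # very small step vs. the step used above
    w = torch.zeros(1, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    for i in range(100):
        loss = (0.5 * (y - (w * x + b)) ** 2).mean()
        loss.backward()
        w.data.sub_(lr * w.grad)
        b.data.sub_(lr * b.grad)
        w.grad.zero_()
        b.grad.zero_()
    # the smaller lr converges far more slowly
    print(f"lr={lr}: loss after 100 iterations = {loss.item():.4f}")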

That reminds me of the story of how the world-renowned architect Walter Gropius, inspired by an aged lady selling grapes, designed the optimal paths for Disneyland. He suggested scattering grass seeds all over the place at first (here, we initialize the parameters randomly) and letting visitors walk across the lawn until the tracks of the paths became fixed. The idea behind it is to find paths that satisfy most of the visitors, quite a tough job, as tough as finding parameters that fit most of the data (we don't take overfitting into account here). What we do is minimize the loss function by moving against the gradients using batches of data; what he did was follow the paths that lots of visitors trod, the routes lots of visitors chose. People don't adapt their direction at every single step, though, so it's not a perfect analogy; I still bring it up hoping you can gain more intuition about direction and gradient.

I think it's time to go back to the Boston Housing example; I have some data processing tricks to show you.

(detailed in Boston_Housing.py)

If we still have plenty of time

So, now that we are all clear about what linear regression does and how the whole thing makes sense, how do we find a way to let the data talk in a classification task? I'll make a start on logistic regression (there is "regression" in the name, but it is used for classification tasks).

The activation function will do the trick for us; I'll take the sigmoid activation function as an example.

(An illustration of the signal processing in a sigmoid function.)
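In case the figure doesn't render, the formula in question is the standard sigmoid:

$$\sigma(z) = \frac{1}{1 + e^{-z}}$$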

A first glance at the formula tells us that, for any real number as input, the function gives back a float between 0 and 1, exactly the range of a probability. So why not view every output y as the probability of belonging to a classification category?

In the same way as with linear regression, we set the loss function first. Here is the most commonly used loss function for logistic regression: cross-entropy loss.

(Cross-entropy loss; source below.)

https://blog.csdn.net/yinyu19950811/article/details/81321944
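Written out for binary classification (the standard form; N samples, labels y_i in {0, 1}, predicted probabilities ŷ_i):

$$L(y, \hat{y}) = -\frac{1}{N}\sum_{i=1}^{N}\left[\, y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\,\right]$$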

You might still be confused by the term activation function. I'd say the key point is "nonlinear": in the linear regression part, you saw that the only two operations used inside the prediction equation were the dot product and a sum. Both are linear operations.

If you add more layers but keep using only linear operations, the added layers have no effect, because a composition of linear operations is itself just another linear operation. This implies that, for a purely linear network with multiple layers, there is always a network with fewer layers that predicts exactly the same results.
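You can see the collapse directly. Here is a small sketch (my own illustration, not from the course files) that merges two stacked nn.Linear layers into one equivalent layer:

import torch
import torch.nn as nn

torch.manual_seed(0)
f = nn.Sequential(nn.Linear(3, 4), nn.Linear(4, 2))  # no activation in between
# Compose the two weight matrices into a single equivalent layer:
# f(x) = W2(W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2)
g = nn.Linear(3, 2)
with torch.no_grad():
    W1, b1 = f[0].weight, f[0].bias
    W2, b2 = f[1].weight, f[1].bias
    g.weight.copy_(W2 @ W1)
    g.bias.copy_(W2 @ b1 + b2)

x = torch.randn(5, 3)
print(torch.allclose(f(x), g(x), atol=1e-6))  # True: the same function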

What you want is to find an operation that makes the middle layers sometimes correlate with an input and sometimes not correlate.

You can achieve this behavior by using nonlinear functions. These nonlinear functions are called activation functions.
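Putting the pieces together, here is a minimal sketch (my own, with made-up toy data) of a single logistic-regression unit in PyTorch: a linear operation, a sigmoid activation, and the cross-entropy loss.

import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(100, 2)                           # two toy features
y = (X[:, 0] + X[:, 1] > 0).float().unsqueeze(1)  # linearly separable labels

linear = nn.Linear(2, 1)
optimizer = torch.optim.SGD(linear.parameters(), lr=0.1)
loss_fn = nn.BCELoss()                            # binary cross-entropy

for i in range(200):
    p = torch.sigmoid(linear(X))                  # output in (0, 1): a probability
    loss = loss_fn(p, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(loss.item())  # the loss should have decreased steadily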


This single unit, activated by an activation function, in logistic regression is the basic neuron of a neural network; a neural network is composed of such units as hidden units.


What does a neural network, or deep learning, have to do with the brain? At the risk of giving away the punchline, I would say: not a whole lot. But let's take a quick look at why people keep making the analogy between deep learning and the human brain. When you implement a neural network, this is what you do: forward prop and back prop. I think it has been difficult to convey intuition about what these equations are doing (really, gradient descent on a very complex function), so the brain analogy has become an oversimplified explanation for it. The simplicity of the analogy makes it seductive for people to state publicly and for the media to report, and it certainly caught the popular imagination. There is a very loose analogy between, let's say, a logistic regression unit with a sigmoid activation function, and a cartoon of a single neuron in the brain.

(Cartoon of a single biological neuron.)

In this picture of a biological neuron, the neuron, which is a cell in your brain, receives electric signals from other neurons, X_1, X_2, X_3, or maybe from other neurons A_1, A_2, A_3, does a simple thresholding computation, and then, if it fires, sends a pulse of electricity down the axon, down a long wire, perhaps to other neurons. So there is a very simplistic analogy between a single unit in a neural network and a biological neuron like the one shown, but I think that today even neuroscientists have almost no idea what a single neuron is really doing. A single neuron appears to be much more complex than we are able to characterize with neuroscience, and while part of what it does is a little bit like logistic regression, there is still a lot about what even a single neuron does that no human today understands. For example, exactly how neurons in the human brain learn is still a very mysterious process. It is completely unclear today whether the human brain uses an algorithm anything like back propagation or gradient descent, or whether there is some fundamentally different learning principle at work.

So when I think of deep learning, I think of it as being very good at learning very flexible, very complex functions, learning X-to-Y mappings, input-output mappings, in supervised learning. As for the brain analogy, maybe it was useful once, but the field has moved to the point where it is breaking down, and I tend not to use it much anymore. That's it for neural networks and the brain. I do think that computer vision has taken a bit more inspiration from the human brain than other disciplines that apply deep learning, but I personally use the analogy to the human brain less than I used to.

The following pictures are from research on what the layers of neurons do in a CV (computer vision) neural network; I hope they help you gain more intuition about neural networks.

(Layer-by-layer feature visualizations from a computer-vision network; source below.)

https://www.cnblogs.com/peizhe123/p/4641149.html

I guess this is it. I hope you had a great time with me. Thank you!

