Classification Network: VGG16


Advantages of the VGG16 network:

1. Stacking two 3x3 convolution kernels replaces one 5x5 kernel, and stacking three 3x3 kernels replaces one 7x7 kernel. The stack covers the same receptive field but needs fewer parameters. Assuming C input and output channels, the parameter counts are (a quick numerical check follows this list):

Three 3x3 kernels: 3 x 3 x C x C + 3 x 3 x C x C + 3 x 3 x C x C = 27 x C x C

One 7x7 kernel: 7 x 7 x C x C = 49 x C x C

(Likewise, two stacked 3x3 kernels cost 18 x C x C versus 25 x C x C for one 5x5 kernel.)

Receptive field recurrence: F(i) = (F(i+1) - 1) x S + K, where F(i) is the receptive field of layer i, S is the stride, and K is the kernel size.

2. Compared with the previous-generation AlexNet, replacing the large 11x11 and 5x5 kernels with stacks of 3x3 kernels greatly reduces the parameters spent on the convolution layers (the network as a whole remains large, since most of VGG16's parameters sit in its fully connected layers).
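A minimal sketch of the numerical check promised above; the helper receptive_field is my own illustration, not code from the original post:

def receptive_field(kernels, strides):
    """Apply F(i) = (F(i+1) - 1) * S + K from the output back to the input."""
    f = 1  # start from a single output pixel
    for k, s in zip(reversed(kernels), reversed(strides)):
        f = (f - 1) * s + k
    return f

C = 64  # example channel count
print(receptive_field([3, 3, 3], [1, 1, 1]))  # 7 -> same field as one 7x7 conv
print(27 * C * C, 49 * C * C)                 # 110592 vs. 200704 parameters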

Click here to download the flower classification dataset.


model.py

import torch.nn as nn
import torch

# official pretrain weights
model_urls = {
    'vgg11': 'https://download.pytorch.org/models/vgg11-bbd30ac9.pth',
    'vgg13': 'https://download.pytorch.org/models/vgg13-c768596a.pth',
    'vgg16': 'https://download.pytorch.org/models/vgg16-397923af.pth',
    'vgg19': 'https://download.pytorch.org/models/vgg19-dcbb9e9d.pth'
}


class VGG(nn.Module):
    def __init__(self, features, num_classes=1000, init_weights=False):
        super(VGG, self).__init__()
        # split the network into a feature-extraction module and a classifier module
        self.features = features
        self.classifier = nn.Sequential(
            nn.Linear(512 * 7 * 7, 4096),
            nn.ReLU(True),
            nn.Dropout(p=0.5),
            nn.Linear(4096, 4096),
            nn.ReLU(True),
            nn.Dropout(p=0.5),
            nn.Linear(4096, num_classes)
        )
        if init_weights:
            self._initialize_weights()

    def forward(self, x):
        # N x 3 x 224 x 224
        x = self.features(x)
        # N x 512 x 7 x 7
        x = torch.flatten(x, start_dim=1)
        # N x 512*7*7
        x = self.classifier(x)
        return x

    def _initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                nn.init.xavier_uniform_(m.weight)
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)
                # nn.init.normal_(m.weight, 0, 0.01)
                nn.init.constant_(m.bias, 0)


# feature-extraction network
def make_features(cfg: list):
    layers = []
    in_channels = 3
    for v in cfg:
        if v == "M":
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
        else:
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
            layers += [conv2d, nn.ReLU(True)]
            in_channels = v
    # *layers unpacks the list as positional arguments to Sequential
    return nn.Sequential(*layers)


# Stacking two 3x3 kernels replaces one 5x5 kernel; stacking three replaces one 7x7.
# Same receptive field, fewer parameters (C input channels):
# three 3x3 kernels: 3 x 3 x C x C + 3 x 3 x C x C + 3 x 3 x C x C = 27 x C x C
# one 7x7 kernel:    7 x 7 x C x C = 49 x C x C
# receptive field:   F(i) = (F(i+1) - 1) x S + K, where F(i) is the receptive field
#                    of layer i, S is the stride, and K is the kernel size
cfgs = {
    'vgg11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'vgg13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'vgg16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
    'vgg19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}


def vgg(model_name="vgg16", **kwargs):
    assert model_name in cfgs, "Warning: model number {} not in cfgs dict!".format(model_name)
    cfg = cfgs[model_name]
    model = VGG(make_features(cfg), **kwargs)
    return model


if __name__ == '__main__':
    model_name = "vgg16"
    net = vgg(model_name=model_name, num_classes=5, init_weights=True)
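As a quick sanity check (my addition, not part of the original file), a dummy batch can be pushed through the network to confirm the output shape matches num_classes:

import torch
from model import vgg

net = vgg(model_name="vgg16", num_classes=5, init_weights=True)
x = torch.randn(1, 3, 224, 224)  # N x 3 x 224 x 224, as in forward()
print(net(x).shape)              # expected: torch.Size([1, 5])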

train.py

import os
import json

import torch
import torch.nn as nn
from torchvision import transforms, datasets
import torch.optim as optim
from tqdm import tqdm

from model import vgg


def main():
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    print("using {} device.".format(device))

    data_transform = {
        "train": transforms.Compose([transforms.RandomResizedCrop(224),
                                     transforms.RandomHorizontalFlip(),
                                     transforms.ToTensor(),
                                     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]),
        "val": transforms.Compose([transforms.Resize((224, 224)),
                                   transforms.ToTensor(),
                                   transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])}

    data_root = os.path.abspath(os.path.join(os.getcwd(), "../.."))  # get data root path
    image_path = os.path.join(data_root, "data_set", "flower_data")  # flower data set path
    assert os.path.exists(image_path), "{} path does not exist.".format(image_path)
    train_dataset = datasets.ImageFolder(root=os.path.join(image_path, "train"),
                                         transform=data_transform["train"])
    train_num = len(train_dataset)

    # {'daisy':0, 'dandelion':1, 'roses':2, 'sunflower':3, 'tulips':4}
    flower_list = train_dataset.class_to_idx
    cla_dict = dict((val, key) for key, val in flower_list.items())
    # write dict into json file
    json_str = json.dumps(cla_dict, indent=4)
    with open('class_indices.json', 'w') as json_file:
        json_file.write(json_str)

    batch_size = 1
    nw = min([os.cpu_count(), batch_size if batch_size > 1 else 0, 8])  # number of workers
    print('Using {} dataloader workers every process'.format(nw))

    train_loader = torch.utils.data.DataLoader(train_dataset,
                                               batch_size=batch_size, shuffle=True,
                                               num_workers=nw)

    validate_dataset = datasets.ImageFolder(root=os.path.join(image_path, "val"),
                                            transform=data_transform["val"])
    val_num = len(validate_dataset)
    validate_loader = torch.utils.data.DataLoader(validate_dataset,
                                                  batch_size=batch_size, shuffle=False,
                                                  num_workers=nw)
    print("using {} images for training, {} images for validation.".format(train_num,
                                                                           val_num))

    # test_data_iter = iter(validate_loader)
    # test_image, test_label = test_data_iter.next()

    model_name = "vgg16"
    net = vgg(model_name=model_name, num_classes=5, init_weights=True)
    net.to(device)
    loss_function = nn.CrossEntropyLoss()
    optimizer = optim.Adam(net.parameters(), lr=0.0001)

    epochs = 30
    best_acc = 0.0
    save_path = './{}Net.pth'.format(model_name)
    train_steps = len(train_loader)
    for epoch in range(epochs):
        # train
        net.train()
        running_loss = 0.0
        train_bar = tqdm(train_loader)
        for step, data in enumerate(train_bar):
            images, labels = data
            optimizer.zero_grad()
            outputs = net(images.to(device))
            loss = loss_function(outputs, labels.to(device))
            loss.backward()
            optimizer.step()

            # print statistics
            running_loss += loss.item()

            train_bar.desc = "train epoch[{}/{}] loss:{:.3f}".format(epoch + 1,
                                                                     epochs,
                                                                     loss)

        # validate
        net.eval()
        acc = 0.0  # accumulate accurate number / epoch
        with torch.no_grad():
            val_bar = tqdm(validate_loader)
            for val_data in val_bar:
                val_images, val_labels = val_data
                outputs = net(val_images.to(device))
                predict_y = torch.max(outputs, dim=1)[1]
                acc += torch.eq(predict_y, val_labels.to(device)).sum().item()

        val_accurate = acc / val_num
        print('[epoch %d] train_loss: %.3f  val_accuracy: %.3f' %
              (epoch + 1, running_loss / train_steps, val_accurate))

        if val_accurate > best_acc:
            best_acc = val_accurate
            torch.save(net.state_dict(), save_path)

    print('Finished Training')


if __name__ == '__main__':
    torch.cuda.empty_cache()
    main()
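Note that model_urls is defined in model.py but never used by train.py. A hedged sketch of how the official torchvision weights could be initialized into this model, assuming the Sequential layer ordering here matches torchvision's VGG (it follows the same conv/ReLU/pool and Linear layout); only the final classifier.6 layer differs in size because num_classes=5 here:

import torch
from model import vgg, model_urls

net = vgg(model_name="vgg16", num_classes=5)
state_dict = torch.hub.load_state_dict_from_url(model_urls["vgg16"])
# drop the 1000-way output layer; its shape does not match num_classes=5
state_dict = {k: v for k, v in state_dict.items()
              if not k.startswith("classifier.6")}
net.load_state_dict(state_dict, strict=False)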

predict.py

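A minimal single-image prediction script, sketched to match the model and the class_indices.json produced by train.py; the image path ./tulip.jpg is a hypothetical example:

import os
import json

import torch
from PIL import Image
from torchvision import transforms

from model import vgg


def main():
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    # same preprocessing as the validation transform in train.py
    data_transform = transforms.Compose(
        [transforms.Resize((224, 224)),
         transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

    # load the image (hypothetical example path)
    img_path = "./tulip.jpg"
    assert os.path.exists(img_path), "file: '{}' does not exist.".format(img_path)
    img = Image.open(img_path)
    img = data_transform(img)
    img = torch.unsqueeze(img, dim=0)  # add the batch dimension: N x 3 x 224 x 224

    # read the class index -> name mapping written by train.py
    json_path = './class_indices.json'
    assert os.path.exists(json_path), "file: '{}' does not exist.".format(json_path)
    with open(json_path, "r") as f:
        class_indict = json.load(f)

    # create the model and load the weights saved by train.py
    model = vgg(model_name="vgg16", num_classes=5).to(device)
    weights_path = "./vgg16Net.pth"
    assert os.path.exists(weights_path), "file: '{}' does not exist.".format(weights_path)
    model.load_state_dict(torch.load(weights_path, map_location=device))

    model.eval()
    with torch.no_grad():
        output = torch.squeeze(model(img.to(device))).cpu()
        predict = torch.softmax(output, dim=0)
        predict_cla = torch.argmax(predict).numpy()

    print("class: {}   prob: {:.3f}".format(class_indict[str(predict_cla)],
                                            predict[predict_cla].item()))


if __name__ == '__main__':
    main()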

class_indices.json

{"0": "daisy","1": "dandelion","2": "roses","3": "sunflowers","4": "tulips"
}

