Building ResNet101 and ResNet152 Networks with PyTorch


For the ResNet18 build, see: Building a ResNet18 Network with PyTorch and Training/Testing It on CIFAR10
For the ResNet34 build, see: Building a ResNet34 Network with PyTorch
For the ResNet50 build, see: Building a ResNet50 Network with PyTorch

These models follow my ResNet50 build: above 50 layers the bottleneck structure is essentially identical, and only the number of stacked units per stage changes, so the code below is left mostly uncommented.
For line-by-line comments on ResNet101 and ResNet152, refer to the comments in my ResNet50 build.
For training ResNet101 and ResNet152, refer to the training section in my ResNet18 post.
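
For reference, the per-stage unit counts (conv2_x through conv5_x) are the only thing that changes between the three variants; a small lookup table for them, per the original ResNet paper, might look like this:

# Bottleneck units per stage (conv2_x .. conv5_x) for each variant,
# as given in the original ResNet paper (He et al., 2015)
BLOCKS_PER_STAGE = {
    "resnet50":  (3, 4, 6, 3),
    "resnet101": (3, 4, 23, 3),
    "resnet152": (3, 8, 36, 3),
}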

ResNet101 and ResNet152 still follow the same architecture diagram as ResNet50:
[ResNet50 architecture diagram]

On to the code.

The model.py for ResNet101:

import torch
import torch.nn as nn
from torch.nn import functional as F


class DownSample(nn.Module):
    """1x1 projection shortcut, used where the channel count (and possibly
    the spatial size) changes between stages.

    Note: the trailing ReLU deviates from the original paper, where the
    projection shortcut is Conv + BN only; it is kept as in my ResNet50 build.
    """

    def __init__(self, in_channel, out_channel, stride):
        super(DownSample, self).__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_channel, out_channel, kernel_size=1, stride=stride, padding=0, bias=False),
            nn.BatchNorm2d(out_channel),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return self.down(x)


class ResNet101(nn.Module):
    def __init__(self, classes_num):            # number of output classes
        super(ResNet101, self).__init__()
        # Stem: 7x7 conv + 3x3 max-pool, 3x224x224 -> 64x56x56
        self.pre = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        )
        # --------------------------------------------------------------------
        # conv2_x: first bottleneck raises channels 64 -> 256
        self.layer1_first = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 256, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256)
        )
        self.layer1_next = nn.Sequential(
            nn.Conv2d(256, 64, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 256, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256)
        )
        # --------------------------------------------------------------------
        # conv3_x: first unit downsamples (stride 2) and raises channels to 512
        self.layer2_first = nn.Sequential(
            nn.Conv2d(256, 128, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512)
        )
        self.layer2_next = nn.Sequential(
            nn.Conv2d(512, 128, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512)
        )
        # --------------------------------------------------------------------
        # conv4_x: first unit downsamples and raises channels to 1024
        self.layer3_first = nn.Sequential(
            nn.Conv2d(512, 256, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1024, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(1024)
        )
        self.layer3_next = nn.Sequential(
            nn.Conv2d(1024, 256, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1024, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(1024)
        )
        # --------------------------------------------------------------------
        # conv5_x: first unit downsamples and raises channels to 2048
        self.layer4_first = nn.Sequential(
            nn.Conv2d(1024, 512, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 2048, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(2048)
        )
        self.layer4_next = nn.Sequential(
            nn.Conv2d(2048, 512, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 2048, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(2048)
        )
        # --------------------------------------------------------------------
        # Projection shortcuts are registered here rather than created inside
        # forward: building them in forward would re-initialize their weights
        # on every pass (so they could never be trained) and required a
        # hard-coded .to('cuda:0') to match the model's device.
        self.layer1_shortcut = DownSample(64, 256, 1)
        self.layer2_shortcut = DownSample(256, 512, 2)
        self.layer3_shortcut = DownSample(512, 1024, 2)
        self.layer4_shortcut = DownSample(1024, 2048, 2)
        # --------------------------------------------------------------------
        self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(2048 * 1 * 1, 1000),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(1000, classes_num)
        )

    def forward(self, x):
        out = self.pre(x)
        # --------------------------------------------------------------------
        # conv2_x: 1 + 2 = 3 bottleneck units. NOTE: layer1_next is a single
        # module reused on every loop iteration, so the trailing units share
        # one set of weights; the reference ResNet gives each unit its own.
        identity = self.layer1_shortcut(out)
        out = self.layer1_first(out)
        out = F.relu(out + identity, inplace=True)
        for _ in range(2):
            identity = out
            out = self.layer1_next(out)
            out = F.relu(out + identity, inplace=True)
        # --------------------------------------------------------------------
        # conv3_x: 1 + 3 = 4 units
        identity = self.layer2_shortcut(out)
        out = self.layer2_first(out)
        out = F.relu(out + identity, inplace=True)
        for _ in range(3):
            identity = out
            out = self.layer2_next(out)
            out = F.relu(out + identity, inplace=True)
        # --------------------------------------------------------------------
        # conv4_x: 1 + 22 = 23 units (this stage is what makes it ResNet101)
        identity = self.layer3_shortcut(out)
        out = self.layer3_first(out)
        out = F.relu(out + identity, inplace=True)
        for _ in range(22):
            identity = out
            out = self.layer3_next(out)
            out = F.relu(out + identity, inplace=True)
        # --------------------------------------------------------------------
        # conv5_x: 1 + 2 = 3 units
        identity = self.layer4_shortcut(out)
        out = self.layer4_first(out)
        out = F.relu(out + identity, inplace=True)
        for _ in range(2):
            identity = out
            out = self.layer4_next(out)
            out = F.relu(out + identity, inplace=True)
        # --------------------------------------------------------------------
        out = self.avg_pool(out)
        out = out.reshape(out.size(0), -1)
        out = self.fc(out)
        return out
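
A quick way to check the wiring is a dummy forward pass. This is a minimal sketch, assuming the ResNet101 class above; classes_num=10 and the 224x224 input size are just example values:

import torch

model = ResNet101(classes_num=10)    # e.g. 10 classes for CIFAR10
model.eval()
with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)  # one dummy RGB image
    y = model(x)
print(y.shape)                       # expected: torch.Size([1, 10])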

The model.py for ResNet152:

import torch
import torch.nn as nn
from torch.nn import functional as F


class DownSample(nn.Module):
    """1x1 projection shortcut; see the note in the ResNet101 file about the
    trailing ReLU, which deviates from the original paper."""

    def __init__(self, in_channel, out_channel, stride):
        super(DownSample, self).__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_channel, out_channel, kernel_size=1, stride=stride, padding=0, bias=False),
            nn.BatchNorm2d(out_channel),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return self.down(x)


class ResNet152(nn.Module):
    def __init__(self, classes_num):            # number of output classes
        super(ResNet152, self).__init__()
        # Stem: 7x7 conv + 3x3 max-pool, 3x224x224 -> 64x56x56
        self.pre = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        )
        # -----------------------------------------------------------------------
        # conv2_x
        self.layer1_first = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 256, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256)
        )
        self.layer1_next = nn.Sequential(
            nn.Conv2d(256, 64, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 256, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256)
        )
        # -----------------------------------------------------------------------
        # conv3_x
        self.layer2_first = nn.Sequential(
            nn.Conv2d(256, 128, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512)
        )
        self.layer2_next = nn.Sequential(
            nn.Conv2d(512, 128, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512)
        )
        # -----------------------------------------------------------------------
        # conv4_x
        self.layer3_first = nn.Sequential(
            nn.Conv2d(512, 256, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1024, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(1024)
        )
        self.layer3_next = nn.Sequential(
            nn.Conv2d(1024, 256, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1024, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(1024)
        )
        # -----------------------------------------------------------------------
        # conv5_x
        self.layer4_first = nn.Sequential(
            nn.Conv2d(1024, 512, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 2048, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(2048)
        )
        self.layer4_next = nn.Sequential(
            nn.Conv2d(2048, 512, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 2048, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(2048)
        )
        # -----------------------------------------------------------------------
        # Projection shortcuts, registered in __init__ for the same reason as
        # in the ResNet101 file: created inside forward they would be freshly
        # re-initialized (and on the wrong device) on every pass.
        self.layer1_shortcut = DownSample(64, 256, 1)
        self.layer2_shortcut = DownSample(256, 512, 2)
        self.layer3_shortcut = DownSample(512, 1024, 2)
        self.layer4_shortcut = DownSample(1024, 2048, 2)
        # -----------------------------------------------------------------------
        self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(2048 * 1 * 1, 1000),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(1000, classes_num)
        )

    def forward(self, x):
        out = self.pre(x)
        # -----------------------------------------------------------------------
        # conv2_x: 1 + 2 = 3 units (layerN_next modules are reused, so the
        # trailing units of each stage share weights; see the ResNet101 note)
        identity = self.layer1_shortcut(out)
        out = self.layer1_first(out)
        out = F.relu(out + identity, inplace=True)
        for _ in range(2):
            identity = out
            out = self.layer1_next(out)
            out = F.relu(out + identity, inplace=True)
        # -----------------------------------------------------------------------
        # conv3_x: 1 + 7 = 8 units
        identity = self.layer2_shortcut(out)
        out = self.layer2_first(out)
        out = F.relu(out + identity, inplace=True)
        for _ in range(7):
            identity = out
            out = self.layer2_next(out)
            out = F.relu(out + identity, inplace=True)
        # -----------------------------------------------------------------------
        # conv4_x: 1 + 35 = 36 units (this stage is what makes it ResNet152)
        identity = self.layer3_shortcut(out)
        out = self.layer3_first(out)
        out = F.relu(out + identity, inplace=True)
        for _ in range(35):
            identity = out
            out = self.layer3_next(out)
            out = F.relu(out + identity, inplace=True)
        # -----------------------------------------------------------------------
        # conv5_x: 1 + 2 = 3 units
        identity = self.layer4_shortcut(out)
        out = self.layer4_first(out)
        out = F.relu(out + identity, inplace=True)
        for _ in range(2):
            identity = out
            out = self.layer4_next(out)
            out = F.relu(out + identity, inplace=True)
        # -----------------------------------------------------------------------
        out = self.avg_pool(out)
        out = out.reshape(out.size(0), -1)
        out = self.fc(out)
        return out
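
The same kind of smoke test works for ResNet152. Counting parameters is also a quick way to see the effect of the reused layerN_next blocks: because weights are shared within each stage, the total should come out well below torchvision's resnet152. Again a minimal sketch with example values:

import torch

model = ResNet152(classes_num=10)
n_params = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {n_params:,}")
with torch.no_grad():
    y = model(torch.randn(2, 3, 224, 224))  # dummy batch of 2 images
print(y.shape)                               # expected: torch.Size([2, 10])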

