[Quant Notes] Pairs Trading


Steps in pairs trading

1. How to select the stocks to pair
2. Once a pair is chosen, how to design the trading strategy and where to place the entry points
3. When a position is opened, how to size the long and short legs against each other

Selecting the stock pair

1. Matching within the same industry
2. Pairing along the supply chain
3. Pairing by financial characteristics

The minimum-distance method

Pairs trading starts by standardizing the price series. Let $P_t^i$ ($t=0,1,2,\dots,T$) denote the price of stock $i$ on day $t$; the one-period return of stock $i$ on day $t$ is
$$r_t^i=\frac{P_t^i-P_{t-1}^{i}}{P_{t-1}^{i}},\qquad t=1,2,\dots,T$$

Let $\hat{p}_t^i$ denote the standardized price of stock $i$ on day $t$; academia and practitioners alike compute $\hat{p}_t^i$ from the cumulative return over the first $t$ days:
$$\hat{p}^i_t=\prod_{\tau=1}^t (1+r_\tau^i)$$

Given two stocks X and Y, we can compute the sum of squared deviations (SSD) between their standardized prices:

$$SSD_{X,Y}=\sum_{t=1}^T \left(\hat{p}_t^X-\hat{p}_t^Y\right)^2$$
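As a quick sanity check of these formulas, here is a toy calculation on two made-up three-day price paths (values are purely illustrative):

import numpy as np
import pandas as pd
# toy prices (made up for illustration only)
px = pd.Series([10.0, 10.5, 10.29])    # stock X
py = pd.Series([20.0, 20.8, 21.008])   # stock Y
rx = px.pct_change()[1:]               # returns of X: 0.05, -0.02
ry = py.pct_change()[1:]               # returns of Y: 0.04,  0.01
phat_x = (1 + rx).cumprod()            # standardized prices of X: 1.05, 1.029
phat_y = (1 + ry).cumprod()            # standardized prices of Y: 1.04, 1.0504
ssd = np.sum((phat_x - phat_y)**2)     # (0.01)^2 + (-0.0214)^2 ≈ 0.000558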

Below we compute the SSD in Python.

import pandas as pd
sh=pd.read_csv('sh50p.csv',index_col='Trddt')
sh.index=pd.to_datetime(sh.index)
sh.head()
(Output: the first five rows of daily closing prices for the 50 SSE 50 constituent stocks, indexed by Trddt; 5 rows × 50 columns.)

# define the pair-formation period
formStart='2014-01-01'
formEnd='2015-01-01'
# formation-period data
shform=sh[formStart:formEnd]
shform.head(2)
(Output: the first two rows of the formation-period data; 2 rows × 50 columns.)

# extract the closing prices of Bank of China (601988)
PAf=shform['601988']
# extract the closing prices of SPD Bank (600000)
PBf = shform['600000']
# combine the two stocks
pairf=pd.concat([PAf,PBf],axis=1)
len(pairf)
245
import numpy as np
# build the SSD function: cumulative squared difference of standardized prices
def SSD(priceX,priceY):
    if priceX is None or priceY is None:
        print('missing price series')
        return
    returnX=(priceX-priceX.shift(1))/priceX.shift(1)[1:]
    returnY=(priceY-priceY.shift(1))/priceY.shift(1)[1:]
    standardX=(returnX+1).cumprod()
    standardY=(returnY+1).cumprod()
    SSD=np.sum((standardX-standardY)**2)
    return SSD
dis=SSD(PAf,PBf)
dis
0.47481704588389073
SSD_rst=[]
for i in shform.columns:
    for j in shform.columns:
        if i==j:
            continue
        A=shform[i]
        B=shform[j]
        try:
            num=SSD(A,B)
        except:
            continue
        SSD_rst.append([i,j,num])
SSDdf=pd.DataFrame(SSD_rst)
SSDdf.head()
        0       1         2
0  600000  600010  7.366730
1  600000  600015  0.668806
2  600000  600016  2.563524
3  600000  600018  7.823916
4  600000  600028  3.976679
SSDdf.sort_values(by=2)
           0       1           2
102   600015  601166    0.245662
977   601166  600015    0.245662
1435  601857  601398    0.329987
1244  601398  601857    0.329987
387   600050  601988    0.422743
1452  601988  600050    0.422743
984   601166  600050    0.446014
375   600050  601166    0.446014
1443  601988  600000    0.474817
36    600000  601988    0.474817
300   600036  601318    0.485505
1099  601318  600036    0.485505
1445  601988  600015    0.519690
114   600015  601988    0.519690
982   601166  600036    0.533362
297   600036  601166    0.533362
353   600050  600015    0.545751
86    600015  600050    0.545751
1011  601166  601988    0.546044
1468  601988  601166    0.546044
1002  601166  601318    0.568370
1117  601318  601166    0.568370
303   600036  601398    0.653760
1216  601398  600036    0.653760
948   601088  600111    0.656164
491   600111  601088    0.656164
78    600015  600000    0.668806
1     600000  600015    0.668806
381   600050  601398    0.704233
1218  601398  600050    0.704233
...      ...     ...         ...
479   600111  600109   64.595128
440   600109  600111   64.595128
1434  601857  601390   65.941405
1205  601390  601857   65.941405
947   601088  600109   67.396051
452   600109  601088   67.396051
1186  601390  600585   71.932721
653   600585  601390   71.932721
1204  601390  601766   72.876730
1395  601766  601390   72.876730
1174  601390  600018   73.037316
185   600018  601390   73.037316
689   600637  601186   75.475072
1070  601186  600637   75.475072
674   600637  600109   78.702164
445   600109  600637   78.702164
965   601088  601390   79.636906
1194  601390  601088   79.636906
1182  601390  600111   81.322417
497   600111  601390   81.322417
809   600887  601390   82.309679
1190  601390  600887   82.309679
1067  601186  600518   87.971590
572   600518  601186   87.971590
442   600109  600518   96.995893
557   600518  600109   96.995893
692   600637  601390   97.687461
1187  601390  600637   97.687461
1184  601390  600518  111.693514
575   600518  601390  111.693514

1560 rows × 3 columns
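Because $SSD_{X,Y}=SSD_{Y,X}$, every pair appears twice in the ranking above. A small post-processing step (not in the original, shown here for convenience) keeps one direction of each pair and lists the closest candidates:

# keep one direction of each pair and list the smallest distances
SSDdf.columns = ['stockA', 'stockB', 'SSD']
dedup = SSDdf[SSDdf['stockA'] < SSDdf['stockB']]   # drop the mirrored duplicates
print(dedup.sort_values(by='SSD').head(10))        # the ten closest pairs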

The cointegration model

The cointegration approach starts from unit-root tests: if stock X's log-price series is non-stationary while its first-difference series is stationary, then X's log price is said to be integrated of order one, I(1). The log-price difference is (approximately) the one-period return:
$$\log(P_t^X)-\log(P_{t-1}^X)=\log\!\left(\frac{P_t^X}{P_{t-1}^X}\right)=\log(1+r_t^X)\approx r_t^X$$
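The last step uses the first-order approximation $\log(1+r)\approx r$, which is accurate for small daily returns; a quick numerical check:

import numpy as np
r = 0.01              # a 1% daily return
print(np.log(1 + r))  # 0.00995..., very close to r itself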

Below we test whether the log prices of Bank of China (601988) and SPD Bank (600000) are I(1).

from arch.unitroot import ADF
import numpy as np
PAf.head()
Trddt
2014-01-02    2.333
2014-01-03    2.289
2014-01-06    2.262
2014-01-07    2.253
2014-01-08    2.244
Name: 601988, dtype: float64
PAflog=np.log(PAf)
adfA=ADF(PAflog)
print(adfA.summary().as_text())
   Augmented Dickey-Fuller Results   
=====================================
Test Statistic                  3.409
P-value                         1.000
Lags                               12
-------------------------------------
Trend: Constant
Critical Values: -3.46 (1%), -2.87 (5%), -2.57 (10%)
Null Hypothesis: The process contains a unit root.
Alternative Hypothesis: The process is weakly stationary.

The test statistic is large (p-value ≈ 1), so we cannot reject the null hypothesis that the series contains a unit root: the log closing-price series of Bank of China is non-stationary.

# first difference of the log prices
retA=PAflog.diff()[1:]
retA.head()
Trddt
2014-01-03   -0.019040
2014-01-06   -0.011866
2014-01-07   -0.003987
2014-01-08   -0.004003
2014-01-09   -0.004019
Name: 601988, dtype: float64
adfretA=ADF(retA)
print(adfretA.summary().as_text())
   Augmented Dickey-Fuller Results   
=====================================
Test Statistic                 -4.571
P-value                         0.000
Lags                               11
-------------------------------------
Trend: Constant
Critical Values: -3.46 (1%), -2.87 (5%), -2.57 (10%)
Null Hypothesis: The process contains a unit root.
Alternative Hypothesis: The process is weakly stationary.

The test statistic is small (p-value ≈ 0), so we reject the null hypothesis: the log-difference series has no unit root and is stationary.

Therefore the log-price series of Bank of China is integrated of order one, I(1).

# run the same tests for SPD Bank
PBflog=np.log(PBf)
adfB=ADF(PBflog)
print(adfB.summary().as_text())
   Augmented Dickey-Fuller Results   
=====================================
Test Statistic                  2.392
P-value                         0.999
Lags                               12
-------------------------------------
Trend: Constant
Critical Values: -3.46 (1%), -2.87 (5%), -2.57 (10%)
Null Hypothesis: The process contains a unit root.
Alternative Hypothesis: The process is weakly stationary.

The test statistic is large, so we cannot reject the null hypothesis: the log closing-price series of SPD Bank contains a unit root and is non-stationary.

# log-difference series
retB = PBflog.diff()[1:]
adfretB=ADF(retB)
print(adfretB.summary().as_text())
   Augmented Dickey-Fuller Results   
=====================================
Test Statistic                 -3.888
P-value                         0.002
Lags                               11
-------------------------------------
Trend: Constant
Critical Values: -3.46 (1%), -2.87 (5%), -2.57 (10%)
Null Hypothesis: The process contains a unit root.
Alternative Hypothesis: The process is weakly stationary.

The test statistic is small, so we reject the null hypothesis: the log-difference series has no unit root and is stationary.

Therefore the log-price series of SPD Bank is also I(1).

# plot the log-price series
import matplotlib.pyplot as plt
PAflog.plot(label='ZGYH',style='--')
PBflog.plot(label='PFYH',style='-')
plt.legend(loc='upper left')
plt.title("Log prices of Bank of China and SPD Bank")
Text(0.5, 1.0, 'Log prices of Bank of China and SPD Bank')

(Figure: log-price series of Bank of China and SPD Bank)

retA.plot(label='ZGYH')
retB.plot(label='PFYH')
plt.legend(loc='lower left')
<matplotlib.legend.Legend at 0x1c238d0eb8>

(Figure: log-return series of the two stocks)

Testing the two log-price series for cointegration

The approach: regress one log-price series on the other and test the residuals; if the residual series is stationary, the two log-price series are cointegrated.

import statsmodels.api as sm
model=sm.OLS(PBflog,sm.add_constant(PAflog))
results=model.fit()
print(results.summary())
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                 600000   R-squared:                       0.949
Model:                            OLS   Adj. R-squared:                  0.949
Method:                 Least Squares   F-statistic:                     4560.
Date:                Fri, 23 Aug 2019   Prob (F-statistic):          1.83e-159
Time:                        17:40:05   Log-Likelihood:                 509.57
No. Observations:                 245   AIC:                            -1015.
Df Residuals:                     243   BIC:                            -1008.
Df Model:                           1                                         
Covariance Type:            nonrobust                                         
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          1.2269      0.015     83.071      0.000       1.198       1.256
601988         1.0641      0.016     67.531      0.000       1.033       1.095
==============================================================================
Omnibus:                       19.538   Durbin-Watson:                   0.161
Prob(Omnibus):                  0.000   Jarque-Bera (JB):               13.245
Skew:                           0.444   Prob(JB):                      0.00133
Kurtosis:                       2.286   Cond. No.                         15.2
==============================================================================

Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.

The regression coefficients are highly significant (large t-statistics, p-values essentially zero).

Next, test the residuals for stationarity.
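With the fitted coefficients, the residual (spread) series tested below is

$$\hat{\varepsilon}_t=\log P_t^{600000}-\beta\,\log P_t^{601988}-\alpha$$

which is what the line spread=PBflog-beta*PAflog-alpha computes.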

# extract the regression intercept and slope coefficient
alpha=results.params[0]
beta =results.params[1]
spread=PBflog-beta*PAflog-alpha
spread.head()
Trddt
2014-01-02   -0.011214
2014-01-03   -0.011507
2014-01-06    0.006511
2014-01-07    0.005361
2014-01-08    0.016112
dtype: float64
spread.plot()
<matplotlib.axes._subplots.AxesSubplot at 0x1c23a44470>

(Figure: residual (spread) series over the formation period)

# unit-root test on the spread series
# the residuals have zero mean, so set trend to 'nc' (no constant)
adfSpread=ADF(spread,trend='nc')
print(adfSpread.summary().as_text())
   Augmented Dickey-Fuller Results   
=====================================
Test Statistic                 -3.199
P-value                         0.001
Lags                                0
-------------------------------------
Trend: No Trend
Critical Values: -2.57 (1%), -1.94 (5%), -1.62 (10%)
Null Hypothesis: The process contains a unit root.
Alternative Hypothesis: The process is weakly stationary.

The test statistic is below the critical values, so we reject the null hypothesis: the residual series is stationary, i.e., the two log-price series are cointegrated.
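As a cross-check (not part of the original workflow), statsmodels' built-in Engle-Granger cointegration test should lead to the same conclusion; a minimal sketch:

# optional cross-check with the Engle-Granger test in statsmodels
from statsmodels.tsa.stattools import coint
tstat, pvalue, crit = coint(PBflog, PAflog)
print(pvalue)   # a p-value below 0.05 again points to cointegration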

Designing the pairs-trading strategy

The minimum-distance method

Compute the mean $\mu$ and standard deviation $\sigma$ of the standardized price spread $\hat{p}^X_t-\hat{p}^Y_t$ over the formation period and set the trading trigger at $\mu \pm 2\sigma$, with a six-month trading period. Because SPD Bank and Bank of China are both banking stocks with relatively stable prices, we instead trigger trades when the trading-period spread moves beyond $\mu \pm 1.2\sigma$.

# minimum-distance trading strategy
# standardized price of Bank of China
standardA=(1+retA).cumprod()
# standardized price of SPD Bank
standardB=(1+retB).cumprod()
# standardized price spread
SSD_pair=standardB-standardA
SSD_pair.head()
Trddt
2014-01-03   -0.001514
2014-01-06    0.015407
2014-01-07    0.013962
2014-01-08    0.024184
2014-01-09    0.037629
dtype: float64
meanSSD_pair = np.mean(SSD_pair)
sdSSD_pair=np.std(SSD_pair)
thresholdUP=meanSSD_pair+1.2*sdSSD_pair
thresholdDOWN=meanSSD_pair-1.2*sdSSD_pair
SSD_pair.plot()
plt.axhline(y=meanSSD_pair,color='black')
plt.axhline(y=thresholdUP,color='blue')
plt.axhline(y=thresholdDOWN,color='red')
<matplotlib.lines.Line2D at 0x1c24027860>

(Figure: formation-period standardized spread with the mean and ±1.2σ threshold lines)

During the trading period: when the spread crosses above the upper threshold, open a reverse position (short the spread) and close it when the spread reverts to around the mean; when the spread crosses below the lower threshold, open a forward position (long the spread) and close it when the spread reverts to around the mean.

# trading period
tradStart='2015-01-01'
tradEnd='2015-06-30'
PAt=sh.loc[tradStart:tradEnd,'601988']
PBt=sh.loc[tradStart:tradEnd,'600000']
def spreadCal(x,y):
    retx=(x-x.shift(1))/x.shift(1)[1:]
    rety=(y-y.shift(1))/y.shift(1)[1:]
    standardX=(1+retx).cumprod()
    standardY=(1+rety).cumprod()
    spread=standardY-standardX
    return spread
TradSpread=spreadCal(PAt,PBt).dropna()
TradSpread.describe()
count    118.000000
mean       0.001064
std        0.054323
min       -0.127050
25%       -0.028249
50%        0.005682
75%        0.041375
max        0.100249
dtype: float64
TradSpread.plot()
plt.axhline(y=meanSSD_pair,color='black')
plt.axhline(y=thresholdUP,color='blue')
plt.axhline(y=thresholdDOWN,color='red')
<matplotlib.lines.Line2D at 0x1c240f9a58>

(Figure: trading-period spread plotted against the formation-period mean and thresholds)
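A minimal sketch of how these rules could be turned into a position series for the minimum-distance pair, using the thresholds and TradSpread computed above. The 0.2σ closing band is an assumption (the text only says "around the mean"), so this is an illustration rather than the author's exact implementation:

# illustrative position logic for the minimum-distance spread (hypothetical sketch)
position = 0        # +1: long the spread (long 600000 / short 601988), -1: the reverse, 0: flat
positions = []
for s in TradSpread:
    if position == 0 and s > thresholdUP:
        position = -1                      # spread too wide: short the spread
    elif position == 0 and s < thresholdDOWN:
        position = 1                       # spread too narrow: long the spread
    elif position != 0 and abs(s - meanSSD_pair) < 0.2 * sdSSD_pair:
        position = 0                       # spread back near its mean: close
    positions.append(position)
positions = pd.Series(positions, index=TradSpread.index)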

The cointegration approach

Building a PairTrading class

import re
import pandas as pd
import numpy as np
from arch.unitroot import ADF
import statsmodels.api as sm
class PairTrading:
    # distance (SSD) between the standardized prices of two stocks
    def SSD(self,priceX,priceY):
        if priceX is None or priceY is None:
            print('missing price series')
            return
        returnX=(priceX-priceX.shift(1))/priceX.shift(1)[1:]
        returnY=(priceY-priceY.shift(1))/priceY.shift(1)[1:]
        standardX=(1+returnX).cumprod()
        standardY=(1+returnY).cumprod()
        SSD=np.sum((standardY-standardX)**2)
        return SSD
    # standardized price-spread series
    def SSDSpread(self,priceX,priceY):
        if priceX is None or priceY is None:
            print('missing price series')
            return
        returnX=(priceX-priceX.shift(1))/priceX.shift(1)[1:]
        returnY=(priceY-priceY.shift(1))/priceY.shift(1)[1:]
        standardX=(1+returnX).cumprod()
        standardY=(1+returnY).cumprod()
        spread=standardY-standardX
        return spread
    # test whether the two price series are cointegrated; if so, return the regression parameters
    def cointegration(self,priceX,priceY):
        if priceX is None or priceY is None:
            print('missing price series.')
        priceX=np.log(priceX)
        priceY=np.log(priceY)
        results=sm.OLS(priceY,sm.add_constant(priceX)).fit()
        resid=results.resid
        adfSpread=ADF(resid)
        if adfSpread.pvalue>=0.05:
            print('''The price series are not cointegrated.
P-value of ADF test: %f
Coefficients of regression:
Intercept: %f
Beta: %f
''' % (adfSpread.pvalue, results.params[0], results.params[1]))
            return None
        else:
            print('''The price series are cointegrated.
P-value of ADF test: %f
Coefficients of regression:
Intercept: %f
Beta: %f
''' % (adfSpread.pvalue, results.params[0], results.params[1]))
            return results.params[0], results.params[1]
    # cointegration spread over the trading period
    def CointegrationSpread(self,priceX,priceY,formPeriod,tradePeriod):
        if priceX is None or priceY is None:
            print('missing price series.')
        if not (re.fullmatch(r'\d{4}-\d{2}-\d{2}:\d{4}-\d{2}-\d{2}',formPeriod)
                or re.fullmatch(r'\d{4}-\d{2}-\d{2}:\d{4}-\d{2}-\d{2}',tradePeriod)):
            print('wrong format for the formation or trading period.')
        formX=priceX[formPeriod.split(':')[0]:formPeriod.split(':')[1]]
        formY=priceY[formPeriod.split(':')[0]:formPeriod.split(':')[1]]
        coefficients=self.cointegration(formX,formY)
        if coefficients is None:
            print('no cointegration relation, the pair cannot be formed.')
        else:
            spread=(np.log(priceY[tradePeriod.split(':')[0]:tradePeriod.split(':')[1]])
                    -coefficients[0]
                    -coefficients[1]*np.log(priceX[tradePeriod.split(':')[0]:tradePeriod.split(':')[1]]))
            return spread
    # trading bounds estimated over the formation period
    def calBound(self,priceX,priceY,method,formPeriod,width=1.5):
        if not re.fullmatch(r'\d{4}-\d{2}-\d{2}:\d{4}-\d{2}-\d{2}',formPeriod):
            print('wrong format for the formation period.')
        if method=='SSD':
            spread=self.SSDSpread(priceX[formPeriod.split(':')[0]:formPeriod.split(':')[1]],
                                  priceY[formPeriod.split(':')[0]:formPeriod.split(':')[1]])
            mu=np.mean(spread)
            sd=np.std(spread)
            UpperBound=mu+width*sd
            LowerBound=mu-width*sd
            return UpperBound,LowerBound
        elif method=='Cointegration':
            spread=self.CointegrationSpread(priceX,priceY,formPeriod,formPeriod)
            mu=np.mean(spread)
            sd=np.std(spread)
            UpperBound=mu+width*sd
            LowerBound=mu-width*sd
            return UpperBound,LowerBound
        else:
            print('unknown method, please choose "SSD" or "Cointegration".')
formPeriod='2014-01-01:2015-01-01'
tradePeriod='2015-01-01:2015-06-30'
priceA=sh['601988']
priceB=sh['600000']
priceAf=priceA[formPeriod.split(':')[0]:formPeriod.split(':')[1]]
priceBf=priceB[formPeriod.split(':')[0]:formPeriod.split(':')[1]]
priceAt=priceA[tradePeriod.split(':')[0]:tradePeriod.split(':')[1]]
priceBt=priceB[tradePeriod.split(':')[0]:tradePeriod.split(':')[1]]
pt=PairTrading()
SSD=pt.SSD(priceAf,priceBf)
SSD
0.47481704588389073
SSDspread=pt.SSDSpread(priceAf,priceBf)
SSDspread.describe()
SSDspread.head()
Trddt
2014-01-02         NaN
2014-01-03   -0.001484
2014-01-06    0.015385
2014-01-07    0.013946
2014-01-08    0.024184
dtype: float64
coefficients=pt.cointegration(priceAf,priceBf)
coefficients
The price series are cointegrated.
P-value of ADF test: 0.020415
Coefficients of regression:
Intercept: 1.226852
Beta: 1.064103

(1.2268515742404387, 1.0641034525888144)
CoSpreadF=pt.CointegrationSpread(priceA,priceB,formPeriod,formPeriod)
CoSpreadF.head()
The price series are cointegrated.
P-value of ADF test: 0.020415
Coefficients of regression:
Intercept: 1.226852
Beta: 1.064103

Trddt
2014-01-02   -0.011214
2014-01-03   -0.011507
2014-01-06    0.006511
2014-01-07    0.005361
2014-01-08    0.016112
dtype: float64
CoSpreadTr=pt.CointegrationSpread(priceA,priceB,formPeriod,tradePeriod)
CoSpreadTr.describe()
The price series are cointegrated.
P-value of ADF test: 0.020415
Coefficients of regression:
Intercept: 1.226852
Beta: 1.064103

count    119.000000
mean      -0.037295
std        0.052204
min       -0.163903
25%       -0.063038
50%       -0.033336
75%        0.000503
max        0.057989
dtype: float64
bound=pt.calBound(priceA,priceB,'Cointegration',formPeriod,width=1.2)
bound
The price series are cointegrated.
P-value of ADF test: 0.020415
Coefficients of regression:
Intercept: 1.226852
Beta: 1.064103

(0.03627938704534019, -0.03627938704533997)
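A quick, purely illustrative sanity check of how often the trading-period spread leaves this band (CoSpreadTr and bound are the objects computed above):

# count the trading-period days that fall outside the cointegration band
UpperBound, LowerBound = bound
outside = (CoSpreadTr > UpperBound) | (CoSpreadTr < LowerBound)
print(outside.sum(), 'of', len(CoSpreadTr), 'days lie outside the band')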
# live test of the pairs trade
# extract the formation-period data
formStart='2014-01-01'
formEnd='2015-01-01'
PA=sh['601988']
PB=sh['600000']
PAf=PA[formStart:formEnd]
PBf=PB[formStart:formEnd]
# cointegration test over the formation period
# first, the I(1) tests
log_PAf=np.log(PAf)
adfA=ADF(log_PAf)
print(adfA.summary().as_text())
adfAd=ADF(log_PAf.diff()[1:])
print(adfAd.summary().as_text())
   Augmented Dickey-Fuller Results   
=====================================
Test Statistic                  3.409
P-value                         1.000
Lags                               12
-------------------------------------
Trend: Constant
Critical Values: -3.46 (1%), -2.87 (5%), -2.57 (10%)
Null Hypothesis: The process contains a unit root.
Alternative Hypothesis: The process is weakly stationary.

   Augmented Dickey-Fuller Results   
=====================================
Test Statistic                 -4.571
P-value                         0.000
Lags                               11
-------------------------------------
Trend: Constant
Critical Values: -3.46 (1%), -2.87 (5%), -2.57 (10%)
Null Hypothesis: The process contains a unit root.
Alternative Hypothesis: The process is weakly stationary.
log_PBf=np.log(PBf)
adfB=ADF(log_PBf)
print(adfB.summary().as_text())
adfBd=ADF(log_PBf.diff()[1:])
print(adfBd.summary().as_text())
   Augmented Dickey-Fuller Results   
=====================================
Test Statistic                  2.392
P-value                         0.999
Lags                               12
-------------------------------------
Trend: Constant
Critical Values: -3.46 (1%), -2.87 (5%), -2.57 (10%)
Null Hypothesis: The process contains a unit root.
Alternative Hypothesis: The process is weakly stationary.

   Augmented Dickey-Fuller Results   
=====================================
Test Statistic                 -3.888
P-value                         0.002
Lags                               11
-------------------------------------
Trend: Constant
Critical Values: -3.46 (1%), -2.87 (5%), -2.57 (10%)
Null Hypothesis: The process contains a unit root.
Alternative Hypothesis: The process is weakly stationary.
# cointegration test
model=sm.OLS(log_PBf,sm.add_constant(log_PAf)).fit()
model.summary() 
                            OLS Regression Results                            
==============================================================================
Dep. Variable:                 600000   R-squared:                       0.949
Model:                            OLS   Adj. R-squared:                  0.949
Method:                 Least Squares   F-statistic:                     4560.
Date:                Fri, 23 Aug 2019   Prob (F-statistic):          1.83e-159
Time:                        17:43:03   Log-Likelihood:                 509.57
No. Observations:                 245   AIC:                            -1015.
Df Residuals:                     243   BIC:                            -1008.
Df Model:                           1
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          1.2269      0.015     83.071      0.000       1.198       1.256
601988         1.0641      0.016     67.531      0.000       1.033       1.095
==============================================================================
Omnibus:                       19.538   Durbin-Watson:                   0.161
Prob(Omnibus):                  0.000   Jarque-Bera (JB):               13.245
Skew:                           0.444   Prob(JB):                      0.00133
Kurtosis:                       2.286   Cond. No.                         15.2
==============================================================================


Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
alpha=model.params[0]
alpha 
1.2268515742404387
beta=model.params[1]
beta 
1.0641034525888144
# unit-root test on the residuals
spreadf=log_PBf-beta*log_PAf-alpha
adfSpread=ADF(spreadf)
print(adfSpread.summary().as_text())
   Augmented Dickey-Fuller Results   
=====================================
Test Statistic                 -3.193
P-value                         0.020
Lags                                0
-------------------------------------
Trend: Constant
Critical Values: -3.46 (1%), -2.87 (5%), -2.57 (10%)
Null Hypothesis: The process contains a unit root.
Alternative Hypothesis: The process is weakly stationary.
mu=np.mean(spreadf)
sd=np.std(spreadf)
# set the trading period
tradeStart='2015-01-01'
tradeEnd='2015-06-30'

PAt=PA[tradeStart:tradeEnd]
PBt=PB[tradeStart:tradeEnd]
CoSpreadT=np.log(PBt)-beta*np.log(PAt)-alpha
CoSpreadT.describe()
count    119.000000
mean      -0.037295
std        0.052204
min       -0.163903
25%       -0.063038
50%       -0.033336
75%        0.000503
max        0.057989
dtype: float64
CoSpreadT.plot()
plt.title('Trading-period spread (cointegration pairing)')
plt.axhline(y=mu,color='black')
plt.axhline(y=mu+0.2*sd,color='blue',ls='-',lw=2)
plt.axhline(y=mu-0.2*sd,color='blue',ls='-',lw=2)
plt.axhline(y=mu+1.5*sd,color='green',ls='--',lw=2.5)
plt.axhline(y=mu-1.5*sd,color='green',ls='--',lw=2.5)
plt.axhline(y=mu+2.5*sd,color='red',ls='-.',lw=3) 
plt.axhline(y=mu-2.5*sd,color='red',ls='-.',lw=3)
<matplotlib.lines.Line2D at 0x1c2431b470>

(Figure: trading-period cointegration spread with the μ±0.2σ, ±1.5σ and ±2.5σ bands)

level=(float('-inf'),mu-2.5*sd,mu-1.5*sd,mu-0.2*sd,mu+0.2*sd,mu+1.5*sd,mu+2.5*sd,float('inf'))
prcLevel=pd.cut(CoSpreadT,level,labels=False)-3
prcLevel.head()
Trddt
2015-01-05   -1
2015-01-06   -2
2015-01-07   -3
2015-01-08   -2
2015-01-09   -3
dtype: int64
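The pd.cut call above maps each day's spread into one of seven intervals and recodes them as levels -3 to 3: level 0 means the spread lies within ±0.2σ of μ, ±1 covers 0.2σ to 1.5σ, ±2 covers 1.5σ to 2.5σ, and ±3 lies beyond 2.5σ. As a rough worked example (using μ ≈ 0 and σ ≈ 0.030 implied by the formation-period residuals above), a spread of -0.05 falls between μ-2.5σ and μ-1.5σ and is coded -2. The trading signals below are generated from transitions between these levels.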
def TradeSig(prcLevel):
    n=len(prcLevel)
    signal=np.zeros(n)
    for i in range(1,n):
        # open a reverse position
        if prcLevel[i-1]==1 and prcLevel[i]==2:
            signal[i]=-2
        # close the reverse position
        elif prcLevel[i-1]==1 and prcLevel[i]==0:
            signal[i]=2
        # forced close (stop-loss)
        elif prcLevel[i-1]==2 and prcLevel[i]==3:
            signal[i]=3
        # open a forward position
        elif prcLevel[i-1]==-1 and prcLevel[i]==-2:
            signal[i]=1
        # close the forward position
        elif prcLevel[i-1]==-1 and prcLevel[i]==0:
            signal[i]=-1
        # forced close (stop-loss)
        elif prcLevel[i-1]==-2 and prcLevel[i]==-3:
            signal[i]=-3
    return signal
signal=TradeSig(prcLevel)
position=[signal[0]]
ns=len(signal)
for i in range(1,ns):
    position.append(position[-1])
    # open a forward position
    if signal[i]==1:
        position[i]=1
    # open a reverse position
    elif signal[i]==-2:
        position[i]=-1
    # close the forward position
    elif signal[i]==-1 and position[i-1]==1:
        position[i]=0
    # close the reverse position
    elif signal[i]==2 and position[i-1]==-1:
        position[i]=0
    # forced close
    elif signal[i]==3:
        position[i]=0
    elif signal[i]==-3:
        position[i]=0
position=pd.Series(position,index=CoSpreadT.index)
position.tail()
Trddt
2015-06-24    0.0
2015-06-25    0.0
2015-06-26    0.0
2015-06-29    0.0
2015-06-30    0.0
dtype: float64
def TradeSim(priceX,priceY,position):
    n=len(position)
    size=1000
    shareY=size*position
    shareX=[(-beta)*shareY[0]*priceY[0]/priceX[0]]
    cash=[2000]
    for i in range(1,n):
        shareX.append(shareX[i-1])
        cash.append(cash[i-1])
        if position[i-1]==0 and position[i]==1:
            shareX[i]=(-beta)*shareY[i]*priceY[i]/priceX[i]
            cash[i]=cash[i-1]-(shareY[i]*priceY[i]+shareX[i]*priceX[i])
        elif position[i-1]==0 and position[i]==-1:
            shareX[i]=(-beta)*shareY[i]*priceY[i]/priceX[i]
            cash[i]=cash[i-1]-(shareY[i]*priceY[i]+shareX[i]*priceX[i])
        elif position[i-1]==1 and position[i]==0:
            shareX[i]=0
            cash[i]=cash[i-1]+(shareY[i-1]*priceY[i]+shareX[i-1]*priceX[i])
        elif position[i-1]==-1 and position[i]==0:
            shareX[i]=0
            cash[i]=cash[i-1]+(shareY[i-1]*priceY[i]+shareX[i-1]*priceX[i])
    cash=pd.Series(cash,index=position.index)
    shareY=pd.Series(shareY,index=position.index)
    shareX=pd.Series(shareX,index=position.index)
    asset=cash+shareY*priceY+shareX*priceX
    account=pd.DataFrame({'Position':position,'ShareY':shareY,'ShareX':shareX,'Cash':cash,'Asset':asset})
    return account
account=TradeSim(PAt,PBt,position)
account.tail() 
            Position  ShareY  ShareX      Cash     Asset
Trddt
2015-06-24       0.0     0.0     0.0  5992.514  5992.514
2015-06-25       0.0     0.0     0.0  5992.514  5992.514
2015-06-26       0.0     0.0     0.0  5992.514  5992.514
2015-06-29       0.0     0.0     0.0  5992.514  5992.514
2015-06-30       0.0     0.0     0.0  5992.514  5992.514
account.iloc[:,[1,3,4]].plot(style=['--','-',':'])
plt.title('Pairs-trading account')
Text(0.5, 1.0, 'Pairs-trading account')

(Figure: ShareY, Cash and Asset of the pairs-trading account over the trading period)
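To wrap up, a small illustrative summary of the simulated account (not in the original): the total return over the trading period, taking the first day's Asset value as the starting capital and ignoring transaction costs and the margin required for the short leg:

# illustrative performance summary of the simulated account
total_return = account['Asset'].iloc[-1] / account['Asset'].iloc[0] - 1
print('Total return over the trading period: {:.1%}'.format(total_return))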

