APPENDIX A
MATHEMATICAL MODEL DERIVATIONS
A.1 COMPUTATION OF INCREMENTAL TRANSMISSION LOSS
Derived from the Newton-Raphson power flow method, the real and reactive power
mismatch can be written as:
\[
\begin{bmatrix} dP \\ dQ \end{bmatrix} = [J] \begin{bmatrix} d\delta \\ dV \end{bmatrix} \tag{A.1}
\]
where
\[
\begin{bmatrix} dP \\ dQ \end{bmatrix} = \left[ dP_2, \ldots, dP_n, dQ_{g+1}, \ldots, dQ_n \right]^T \tag{A.2}
\]
\[
\begin{bmatrix} d\delta \\ dV \end{bmatrix} = \left[ d\delta_2, \ldots, d\delta_n, dV_{g+1}, \ldots, dV_n \right]^T \tag{A.3}
\]
\[
[J] = \begin{bmatrix} \partial P / \partial \delta & \partial P / \partial V \\ \partial Q / \partial \delta & \partial Q / \partial V \end{bmatrix}_{(2n-g-1)\times(2n-g-1)} \tag{A.4}
\]
n : total number of buses
g : number of generating buses.
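For concreteness, the following is a minimal numerical sketch of Eq. (A.1): once a power flow routine has produced the Jacobian and the mismatch vector, the state corrections are obtained by solving a linear system rather than by inverting [J]. The Jacobian entries and mismatches below are illustrative placeholders, not values from a real network.

import numpy as np

# Illustrative case with n = 3 buses and g = 1 generator bus, so the reduced
# state [d_delta_2, d_delta_3, dV_2, dV_3] has 2n - g - 1 = 4 entries.
J = np.array([[10.0, -2.0,  1.5, -0.5],
              [-2.0,  8.0, -0.4,  1.2],
              [ 1.0, -0.3,  6.0, -1.0],
              [-0.2,  1.1, -1.0,  5.0]])           # hypothetical Jacobian of Eq. (A.4)
mismatch = np.array([0.02, -0.01, 0.005, -0.003])  # [dP_2, dP_3, dQ_2, dQ_3]

# Solve [J] [d_delta; dV] = [dP; dQ] for the state corrections, Eq. (A.1).
d_state = np.linalg.solve(J, mismatch)
d_delta, dV = d_state[:2], d_state[2:]
print("d_delta =", d_delta, "dV =", dV)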
The reference bus mismatch is determined by:
\[
dP_1 = \begin{bmatrix} \dfrac{\partial P_1}{\partial \delta} & \dfrac{\partial P_1}{\partial V} \end{bmatrix} \begin{bmatrix} d\delta \\ dV \end{bmatrix} \tag{A.5}
\]
where
\[
\begin{bmatrix} \dfrac{\partial P_1}{\partial \delta} & \dfrac{\partial P_1}{\partial V} \end{bmatrix} = \begin{bmatrix} \dfrac{\partial P_1}{\partial \delta_2} & \cdots & \dfrac{\partial P_1}{\partial \delta_n} & \dfrac{\partial P_1}{\partial V_{g+1}} & \cdots & \dfrac{\partial P_1}{\partial V_n} \end{bmatrix} \tag{A.6}
\]
\[
dP_1 = \begin{bmatrix} \dfrac{\partial P_1}{\partial \delta} & \dfrac{\partial P_1}{\partial V} \end{bmatrix} [J]^{-1} \begin{bmatrix} dP \\ dQ \end{bmatrix} \tag{A.7}
\]
\[
dP_1 = [t] \begin{bmatrix} dP \\ dQ \end{bmatrix} \tag{A.8}
\]
where
\[
[t] = \left[ g_2, \ldots, g_n, b_{g+1}, \ldots, b_n \right] \tag{A.9}
\]
\[
[t] = \begin{bmatrix} \dfrac{\partial P_1}{\partial \delta} & \dfrac{\partial P_1}{\partial V} \end{bmatrix} [J]^{-1} \tag{A.10}
\]
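As a practical note, the vector [t] in Eq. (A.10) can be obtained without explicitly forming [J]^{-1}: solving the transposed system [J]^T t = [∂P_1/∂δ  ∂P_1/∂V]^T gives the same vector more cheaply and more stably. The sketch below assumes the Jacobian and the slack-bus sensitivity row are available from the solved power flow; all numerical values are placeholders.

import numpy as np

J = np.array([[10.0, -2.0,  1.5, -0.5],
              [-2.0,  8.0, -0.4,  1.2],
              [ 1.0, -0.3,  6.0, -1.0],
              [-0.2,  1.1, -1.0,  5.0]])    # hypothetical Jacobian, Eq. (A.4)
dP1_row = np.array([-4.0, -3.5, 0.8, 0.6])  # [dP_1/d_delta_i ..., dP_1/dV_i ...], placeholders

# [t] = dP1_row @ inv(J) is equivalent to solving J^T t = dP1_row.
t = np.linalg.solve(J.T, dP1_row)
print("t =", t)   # ordered as [g_2, ..., g_n, b_{g+1}, ..., b_n], see Eq. (A.9)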
The total transmission loss can be expressed as a function of bus power.
\[
P_{loss} = P_{loss}(P_{G1}, \ldots, P_{G,NG}) \tag{A.11}
\]
This can be expanded by Taylor series expansion around the initial bus power $P_G^0$ as follows:
\[
P_{loss}(P_G) = P_{loss}(P_G^0 + dP_G) = P_{loss}(P_G^0) + dP_{loss}(P_G^0) \tag{A.12}
\]
\[
P_{loss}(P_G) = P_{loss}(P_G^0) + dP_{loss} \tag{A.13}
\]
The total transmission loss in a network is a function of bus loads and
generation. From the power balance equation, the total transmission loss is
\[
P_{loss} = \sum_{i=1}^{NB} P_i \tag{A.14}
\]
\[
dP_{loss} = \sum_{i=1}^{NB} dP_i \tag{A.15}
\]
Substituting for $dP_1$:
\[
dP_{loss} = \sum_{i=1}^{NB} (1 + g_i)\, dP_i + \sum_{i=NG+1}^{NB} b_i\, dQ_i \tag{A.16}
\]
The incremental transmission loss ($ITL_i$) is defined as the change in transmission loss due to a change in generation at the $i$-th bus while generation at all other buses remains constant.
\[
ITL_i = \frac{dP_{loss}}{dP_{Gi}} = \frac{dP_{loss}}{dP_i} = 1 + g_i \tag{A.17}
\]
For the slack bus, $ITL_1 = 0$.
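Once the coefficient vector [t] from Eq. (A.10) is available, evaluating Eq. (A.17) amounts to splitting [t] into its g and b parts and adding one to each g coefficient. The sketch below uses placeholder values for a small illustrative system.

n, g = 3, 1                                # buses and generator buses (illustrative)
t = [-0.05, -0.03, 0.01, 0.02]             # [g_2, g_3, b_2, b_3], placeholder values
g_coef = t[:n - 1]                         # g_2, ..., g_n (real-power coefficients)
b_coef = t[n - 1:]                         # b_{g+1}, ..., b_n (reactive-power coefficients)

itl = [0.0] + [1.0 + gi for gi in g_coef]  # ITL_1 = 0 for the slack bus by convention
for i, val in enumerate(itl, start=1):
    print(f"ITL_{i} = {val:.3f}")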
A.2 AUGMENTED LAGRANGE HOPFIELD FOR OPTIMIZATION
PROBLEMS
A.2.1 Background
Hopfield neural networks have been widely used for solving optimization problems
in different fields. The HNN is a recurrent network that operates in an unsupervised
manner. The action of a Hopfield network is based on the minimization of its energy
function mapped from an optimization problem, and the network converges to a
solution of that problem. One of the advantages of the Hopfield network is that it can
efficiently handle variable limits through its sigmoid function. However, the applications
of Hopfield networks to optimization problems are limited to linear constraints, and
the convergence rate is very slow. Besides the resulting large number of iterations,
oscillation is the major drawback the HNN suffers from.
In this book, a newly improved continuous Hopfield neural network, called the
augmented Lagrange Hopfield network (ALHN), is presented to overcome the
difficulties of Hopfield networks by introducing an augmented Lagrangian function
as the energy function of the Hopfield network. The advantages of the proposed neural
network compared to the conventional Hopfield network are as follows:
• In the proposed neural network, it is not necessary to predefine an energy
function associated with penalty factors for mapping the problem into the
Hopfield network in order to determine the synaptic interconnections between
neurons. Moreover, penalty factors may lead to constraint mismatch and
local optima if they are not carefully chosen.
• As the proposed neural network uses an augmented Lagrangian function
as the energy function for the Hopfield network, it can efficiently handle both
equality and inequality constraints of a problem without causing a constraint
mismatch. In addition, the proposed neural network is not limited to problems
with linearized constraints, unlike the Hopfield network.
• The proposed neural network offers very fast convergence compared to the
conventional Hopfield network and can effectively find a global solution for
a problem.
• The proposed neural network can easily deal with large-scale and complex
optimization problems.
• The augmented term in LR can help to damp out oscillation during the
convergence, especially in problems with complex constraints.
• The proposed ALHN uses a sub-gradient technique with updating step sizes
that can be easily tuned to each problem, while the slope of the sigmoid function
for continuous neurons is fixed.
The constrained optimization problem is formulated as follows:
\[
\min \; f(x_k) \tag{A.18}
\]
subject to
\[
g_i(x_k) = 0, \quad i = 1, \ldots, M \tag{A.19}
\]
\[
h_j(x_k) \le 0, \quad j = 1, \ldots, N \tag{A.20}
\]
\[
x_{k,\min} \le x_k \le x_{k,\max}, \quad k = 1, \ldots, K \tag{A.21}
\]
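To make the ALHN idea concrete for the formulation (A.18) to (A.21), the following is a minimal sketch on a toy problem with a single equality constraint and box limits: the energy function is the augmented Lagrangian, the continuous neurons are passed through a sigmoid so the variable limits are enforced automatically, and the multiplier is updated by a sub-gradient step. All cost coefficients, step sizes, and the penalty factor are assumed values chosen for illustration, not the tuned settings used in the book.

import numpy as np

# Toy problem:  min f(x) = sum(c_k * x_k**2)  s.t.  sum(x_k) = D,  x_min <= x_k <= x_max
c = np.array([0.5, 0.8, 1.2])              # cost coefficients (illustrative)
D = 4.0                                    # equality-constraint target (illustrative)
x_min, x_max = 0.0, 3.0                    # variable limits
alpha_u, alpha_lam, beta = 0.1, 0.05, 2.0  # step sizes and penalty factor (assumed)

u = np.zeros(3)                            # continuous-neuron inputs
lam = 0.0                                  # Lagrange multiplier of the equality constraint

def sigmoid_map(v):
    # Sigmoid output function keeps each variable inside [x_min, x_max].
    return x_min + (x_max - x_min) / (1.0 + np.exp(-v))

for _ in range(20000):
    x = sigmoid_map(u)
    viol = x.sum() - D                     # equality-constraint violation g(x)
    # Gradient of the augmented Lagrangian energy
    #   L = f(x) + lam * g(x) + 0.5 * beta * g(x)**2   with respect to x.
    dL_dx = 2.0 * c * x + lam + beta * viol
    u -= alpha_u * dL_dx                   # neuron dynamics: descend the energy function
    lam += alpha_lam * viol                # sub-gradient update of the multiplier
    if abs(viol) < 1e-6 and np.abs(dL_dx).max() < 1e-4:
        break

x = sigmoid_map(u)
print("x =", x, "lambda =", lam, "residual =", x.sum() - D)

Inequality constraints such as (A.20) would be handled the same way, with their multipliers kept non-negative after each sub-gradient step.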