15 Advanced Deep Learning Using Julia Programming
15.1 Convolutional Neural Network
using Flux
using Flux: @epochs, onehotbatch, onecold, throttle
using Statistics
using Base.Iterators: repeated
# Define your CNN model
function build_model()
    return Chain(
        Conv((3, 3), 1=>16, relu),
        MaxPool((2, 2)),
        Conv((3, 3), 16=>32, relu),
        MaxPool((2, 2)),
        Conv((3, 3), 32=>64, relu),
        MaxPool((2, 2)),
        flatten,
        # Three unpadded conv + pool stages reduce a 28x28 input to 1x1x64,
        # so the flattened feature vector has 64 elements
        Dense(64, 128, relu),
        Dense(128, 10),
        softmax
    )
end
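To sanity-check the flattened feature size feeding the first Dense layer, a quick sketch (assuming Flux is installed) can push a dummy MNIST-shaped batch through just the convolutional stages and inspect the output dimensions:

```julia
using Flux

# Only the convolutional part of the model above
convpart = Chain(
    Conv((3, 3), 1=>16, relu), MaxPool((2, 2)),
    Conv((3, 3), 16=>32, relu), MaxPool((2, 2)),
    Conv((3, 3), 32=>64, relu), MaxPool((2, 2)),
)

# Dummy batch in Flux's WHCN layout: 28x28 image, 1 channel, batch of 1
x = rand(Float32, 28, 28, 1, 1)
println(size(convpart(x)))  # (1, 1, 64, 1): 64 features after flattening
```

Each unpadded 3x3 convolution shrinks the spatial extent by 2, and each 2x2 pool halves it (rounding down), so 28 -> 26 -> 13 -> 11 -> 5 -> 3 -> 1.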
# Load your dataset (for example, MNIST)
using Flux.Data.MNIST  # older Flux API; recent Flux versions use MLDatasets.jl instead
imgs = MNIST.images()
labels = MNIST.labels()
# Preprocess your data
# Convert to Float32 and reshape into Flux's WHCN layout (28, 28, 1, N) for the Conv layers
X = reshape(hcat(float.(reshape.(imgs, :))...), 28, 28, 1, :) |> gpu
Y = onehotbatch(labels, 0:9) |> gpu
# Split data into training and testing sets
train_indices = 1:50000
test_indices = 50001:60000
train_data = (X[:, :, :, train_indices], Y[:, train_indices])
test_data = (X[:, :, :, test_indices], Y[:, test_indices])
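Training on all 50,000 images as a single batch is memory-hungry, especially on a GPU. As a sketch (using Flux's `DataLoader`, available in recent Flux versions; the small random arrays below are stand-ins for the real `train_data`), the set could instead be iterated in shuffled minibatches:

```julia
using Flux
using Flux.Data: DataLoader

# Hypothetical stand-ins for the real training arrays (256 samples)
train_x = rand(Float32, 28, 28, 1, 256)
train_y = Flux.onehotbatch(rand(0:9, 256), 0:9)

# 64-sample minibatches, reshuffled each epoch
loader = DataLoader((train_x, train_y), batchsize=64, shuffle=true)
for (x, y) in loader
    # batch dimension of the images matches the label columns
    @assert size(x, 4) == size(y, 2)
end
```

A `DataLoader` can be passed directly to `Flux.train!` in place of the one-tuple dataset used below.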
# Build the model
model = build_model() |> gpu
# Define loss function and optimizer
loss(x, y) = Flux.crossentropy(model(x), y)
optimizer = Flux.ADAM()
# Train the model
epochs = 10
@epochs epochs Flux.train!(loss, Flux.params(model), [train_data], optimizer,
    cb = throttle(() -> println("Loss: ", loss(X, Y)), 10))
# Test the model
accuracy(x, y) = mean(onecold(model(x)) .== onecold(y))
acc = accuracy(test_data...)
println("Test accuracy: $acc")
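The accuracy computation above rests on `onecold`, which is the inverse of `onehotbatch`: it maps each column of scores (or of a one-hot matrix) back to the label with the largest entry. A small self-contained sketch with toy scores:

```julia
using Flux: onehotbatch, onecold
using Statistics

# Toy class scores: 2 classes (rows) x 3 samples (columns)
scores = [0.9 0.2 0.4;
          0.1 0.8 0.6]
truth = onehotbatch([0, 1, 1], 0:1)   # one-hot ground-truth labels

preds = onecold(scores, 0:1)          # argmax per column -> [0, 1, 1]
acc = mean(preds .== onecold(truth, 0:1))
println(acc)                          # 1.0: every toy prediction is correct
```

Comparing the `onecold` of the predictions against the `onecold` of the one-hot labels is what lets `mean(... .== ...)` count the fraction of correct predictions.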
15.2 Deep Neural Network Implementation
using Flux
using Flux: @epochs, onecold, onehotbatch
using Statistics
# Define your DNN model
function build_model() ...