How to Use Optuna to Optimize Algorithm Parameters

Optimization algorithms themselves have parameters that need to be tuned. Optuna is a hyperparameter optimization framework, and it can be used to search for good values of these algorithm parameters.

First, you need to install the optuna library in Python:

pip install optuna

Some features of Optuna rely on the scikit-learn library, so you also need to install it:

pip install scikit-learn

In order to visualize the optimization process, we also need to install the plotly library:

pip install plotly

Let's try it with a simple PSO algorithm as an example. The following PSO code is adapted from this project, with an optuna_objective function added:

import numpy as np
import optuna


def optuna_objective(trial):
    # Maximum number of iterations
    max_iter = trial.suggest_int('max_iter', 100, 1000)
    # Number of particles
    particle_num = trial.suggest_int('n_particles', 10, 100)
    # Inertia factor
    w = trial.suggest_float('w', 0.1, 0.9)
    # Learning factor (self-awareness)
    c1 = trial.suggest_float('c1', 1.0, 5.0)
    # Learning factor (social cognition)
    c2 = trial.suggest_float('c2', 1.0, 5.0)

    # Number of dimensions
    dimension_num = 2

    # Use PSO to find the maximum value of the function
    result_position, result_cost, best_fitness_each_iter, global_best_fitness_found = pso_solver(
        max_iter, particle_num, dimension_num, w, c1, c2)
    return result_cost


# Fitness function
def objective_function(x):
    z = np.sin(np.power(1 - x[0], 2) + 2 * x[1] + np.cos(np.power(x[0], 2))) + np.power(np.sin(x[0] + x[1]), 2)
    return z


# Particle Swarm Optimization
def pso_solver(max_iter, particle_num, dimension_num, w, c1, c2):
    # Set the random seed to ensure reproducibility
    np.random.seed(0)

    # Parameter initialization
    all_particle_positions = np.random.uniform(-5, 5, size=(particle_num, dimension_num))
    all_particle_velocities = np.random.uniform(-1, 1, size=(particle_num, dimension_num))
    best_position_each_particle = all_particle_positions.copy()
    best_fitness_each_particle = [objective_function(pos) for pos in all_particle_positions]

    # Record fitness information during the iterations
    best_fitness_each_iter = []
    global_best_cost_found = []

    global_best_index = np.argmax(best_fitness_each_particle)
    global_best_fitness = best_fitness_each_particle[global_best_index]

    for iter_count in range(max_iter):
        # Update particle velocities
        for i in range(len(all_particle_positions)):
            r1, r2 = np.random.rand(dimension_num), np.random.rand(dimension_num)
            all_particle_velocities[i] = w * all_particle_velocities[i] \
                + c1 * r1 * (best_position_each_particle[i] - all_particle_positions[i]) \
                + c2 * r2 * (best_position_each_particle[global_best_index] - all_particle_positions[i])

        # Update particle positions
        all_particle_positions += all_particle_velocities

        # Limit the particle position coordinates to the given range
        all_particle_positions = np.clip(all_particle_positions, -5, 5)

        # Calculate the fitness of each particle
        all_particle_new_fitness = [objective_function(pos) for pos in all_particle_positions]

        # Update the best position and fitness of each particle
        for i in range(len(all_particle_positions)):
            if all_particle_new_fitness[i] > best_fitness_each_particle[i]:
                best_position_each_particle[i] = all_particle_positions[i].copy()
                best_fitness_each_particle[i] = all_particle_new_fitness[i]

        # Update the best fitness found so far
        if np.max(all_particle_new_fitness) > global_best_fitness:
            global_best_index = np.argmax(all_particle_new_fitness)
            global_best_fitness = all_particle_new_fitness[global_best_index]

        # Best fitness in this iteration (we are maximizing, so take the max)
        best_fitness_each_iter.append(np.max(all_particle_new_fitness))
        global_best_cost_found.append(global_best_fitness)

    return best_position_each_particle[global_best_index], global_best_fitness, \
        best_fitness_each_iter, global_best_cost_found


# Optimizing parameters using Optuna
if __name__ == "__main__":
    study = optuna.create_study(direction='maximize')
    # Each trial runs the complete PSO algorithm
    study.optimize(optuna_objective, n_trials=20)

    # Output the best parameters
    print("Best Params:", study.best_params)
    print("Best Value:", study.best_value)

    # Visualization of the parameter optimization process
    optuna.visualization.plot_optimization_history(study).show()
    optuna.visualization.plot_param_importances(study).show()

The output is as follows:

[I 2025-08-11 23:57:45,185] A new study created in memory with name: no-name-6daf7092-5fd4-4c82-9a2b-df2441c4d5c5
[I 2025-08-11 23:57:45,651] Trial 0 finished with value: 1.9995088890536956 and parameters: {'max_iter': 473, 'n_particles': 99, 'w': 0.14896526977487243, 'c1': 3.019473196926405, 'c2': 4.5818953347263935}. Best is trial 0 with value: 1.9995088890536956.
[I 2025-08-11 23:57:45,777] Trial 1 finished with value: 1.999414768461592 and parameters: {'max_iter': 150, 'n_particles': 86, 'w': 0.4271635960228849, 'c1': 4.801204080903146, 'c2': 2.2371053067612015}. Best is trial 0 with value: 1.9995088890536956.
[I 2025-08-11 23:57:46,133] Trial 2 finished with value: 1.9997511570852196 and parameters: {'max_iter': 721, 'n_particles': 49, 'w': 0.6291229499186842, 'c1': 2.013497488497775, 'c2': 4.2852584211941185}. Best is trial 2 with value: 1.9997511570852196.
[I 2025-08-11 23:57:46,298] Trial 3 finished with value: 2.0 and parameters: {'max_iter': 800, 'n_particles': 20, 'w': 0.3930876221567766, 'c1': 1.6655060141549023, 'c2': 2.463198757904443}. Best is trial 3 with value: 2.0.
[I 2025-08-11 23:57:46,521] Trial 4 finished with value: 2.0 and parameters: {'max_iter': 903, 'n_particles': 23, 'w': 0.5268748587214482, 'c1': 2.335768883377046, 'c2': 2.6385537928559897}. Best is trial 3 with value: 2.0.
[I 2025-08-11 23:57:47,037] Trial 5 finished with value: 1.9999035486915329 and parameters: {'max_iter': 735, 'n_particles': 71, 'w': 0.563337721198632, 'c1': 1.986545588694327, 'c2': 4.425718153574625}. Best is trial 3 with value: 2.0.
[I 2025-08-11 23:57:47,458] Trial 6 finished with value: 1.9999244216017873 and parameters: {'max_iter': 770, 'n_particles': 54, 'w': 0.47276207054655917, 'c1': 2.810493550227325, 'c2': 4.426827824661259}. Best is trial 3 with value: 2.0.
[I 2025-08-11 23:57:47,624] Trial 7 finished with value: 1.9999999979173264 and parameters: {'max_iter': 189, 'n_particles': 91, 'w': 0.3968695262628994, 'c1': 2.433116037725252, 'c2': 3.0499845254954727}. Best is trial 3 with value: 2.0.
[I 2025-08-11 23:57:48,206] Trial 8 finished with value: 1.9999432793205836 and parameters: {'max_iter': 694, 'n_particles': 85, 'w': 0.1530808791349413, 'c1': 3.9355466572193083, 'c2': 2.4538170649621995}. Best is trial 3 with value: 2.0.
[I 2025-08-11 23:57:48,844] Trial 9 finished with value: 2.0 and parameters: {'max_iter': 761, 'n_particles': 86, 'w': 0.7038412182520472, 'c1': 1.221286136785996, 'c2': 1.8683545356811075}. Best is trial 3 with value: 2.0.
[I 2025-08-11 23:57:48,902] Trial 10 finished with value: 1.996219564455837 and parameters: {'max_iter': 466, 'n_particles': 10, 'w': 0.8970011185079363, 'c1': 1.0122340614926895, 'c2': 1.3245288753769002}. Best is trial 3 with value: 2.0.
[I 2025-08-11 23:57:49,098] Trial 11 finished with value: 2.0 and parameters: {'max_iter': 958, 'n_particles': 18, 'w': 0.2872648819518356, 'c1': 1.7049152608787792, 'c2': 3.2749475545163786}. Best is trial 3 with value: 2.0.
[I 2025-08-11 23:57:49,428] Trial 12 finished with value: 1.9994788165013788 and parameters: {'max_iter': 959, 'n_particles': 29, 'w': 0.2988615790760505, 'c1': 3.2259062912230747, 'c2': 3.5552567141109686}. Best is trial 3 with value: 2.0.
[I 2025-08-11 23:57:49,756] Trial 13 finished with value: 2.0 and parameters: {'max_iter': 878, 'n_particles': 34, 'w': 0.6963457756204671, 'c1': 1.5704901816466865, 'c2': 2.6322485802415443}. Best is trial 3 with value: 2.0.
[I 2025-08-11 23:57:49,992] Trial 14 finished with value: 2.0 and parameters: {'max_iter': 598, 'n_particles': 36, 'w': 0.31572999738869745, 'c1': 2.308627019071979, 'c2': 1.6649803974508763}. Best is trial 3 with value: 2.0.
[I 2025-08-11 23:57:50,208] Trial 15 finished with value: 1.9990061787030773 and parameters: {'max_iter': 873, 'n_particles': 22, 'w': 0.5423644658018472, 'c1': 3.6494382789963744, 'c2': 3.8298146334545824}. Best is trial 3 with value: 2.0.
[I 2025-08-11 23:57:50,369] Trial 16 finished with value: 1.9988134920758407 and parameters: {'max_iter': 344, 'n_particles': 43, 'w': 0.8082008093534241, 'c1': 2.5726958294610993, 'c2': 2.791877472924242}. Best is trial 3 with value: 2.0.
[I 2025-08-11 23:57:50,504] Trial 17 finished with value: 2.0 and parameters: {'max_iter': 880, 'n_particles': 11, 'w': 0.37777323064045215, 'c1': 1.525731578204522, 'c2': 2.056855989666862}. Best is trial 3 with value: 2.0.
[I 2025-08-11 23:57:50,927] Trial 18 finished with value: 2.0 and parameters: {'max_iter': 599, 'n_particles': 66, 'w': 0.5077313244966785, 'c1': 2.0738459110485428, 'c2': 1.482837671226557}. Best is trial 3 with value: 2.0.
[I 2025-08-11 23:57:51,176] Trial 19 finished with value: 1.9998145878741809 and parameters: {'max_iter': 1000, 'n_particles': 22, 'w': 0.2311498402332406, 'c1': 3.412147760336736, 'c2': 3.833097567639764}. Best is trial 3 with value: 2.0.
Best Params: {'max_iter': 800, 'n_particles': 20, 'w': 0.3930876221567766, 'c1': 1.6655060141549023, 'c2': 2.463198757904443}
Best Value: 2.0

The parameter optimization process is as follows:

Optuna also provides an analysis of the importance of relevant parameters: