
YUKI_ANN

📂 Download MATLAB Code (Zip)


Metaheuristic-Based Neural Network Training ❓

Meta-heuristic optimization algorithms are popular and effective methods for solving complex optimization problems, and they can improve the performance of neural networks by optimizing the networks' parameters. Their main advantages are:

  • Can handle high-dimensional and non-linear optimization problems: Neural network training involves finding optimal values for a large number of parameters, which makes it a high-dimensional, non-linear optimization problem. Meta-heuristic optimization algorithms are capable of handling such problems and can explore a wide range of candidate solutions to find the best one.

  • Do not require gradient information: Unlike traditional optimization algorithms, meta-heuristic optimization algorithms do not need gradient information to optimize the parameters of a neural network; they only evaluate the objective function. This makes them particularly useful when gradients are difficult or expensive to obtain (see the sketch after this list).

  • Avoid local optima: Neural network training aims to find the global optimum of the objective function, which is difficult due to the presence of many local optima. Meta-heuristic optimization algorithms use a stochastic search strategy that helps them avoid getting stuck in local optima and improves their chances of reaching the global optimum.

  • Can be easily parallelized: Meta-heuristic optimization algorithms are inherently parallelizable, which means that they can take advantage of parallel computing resources to speed up the optimization process. This is particularly useful when training large neural networks that require a lot of computational resources.

  • Can handle noisy or incomplete data: In real-world applications, data can often be noisy or incomplete, which can make it difficult to train neural networks using traditional optimization methods. Meta-heuristic optimization algorithms are robust to such noise and can still find good solutions even with incomplete or noisy data.
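
To make the gradient-free point concrete: the network's training error can be treated as a black-box fitness function that a metaheuristic only evaluates, never differentiates. A minimal MATLAB sketch (toy data and layer sizes, not the downloadable code):

```matlab
% The network's error as a black-box fitness: the optimizer only
% evaluates it at candidate weight vectors, never differentiates it.
% (Illustrative sketch; data and layer sizes are made up.)
X = rand(4, 200);                      % toy inputs (columns = samples)
T = sin(sum(X, 1));                    % toy targets
net = feedforwardnet([10 8]);          % two hidden layers
net = configure(net, X, T);            % fix sizes so getwb/setwb are valid

% MSE of the network for a given weight vector wb:
fitness = @(wb) mean((T - sim(setwb(net, wb), X)).^2, 'all');

wb0 = getwb(net);                      % random initial weight vector
fprintf('MSE at random weights: %.4f\n', fitness(wb0));
% Any metaheuristic can now minimize fitness over this weight space.
```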

They also have disadvantages when used in neural network training:

  • Slow convergence: Meta-heuristic algorithms are designed to explore a large search space, which can make them slower to converge to an optimal solution compared to gradient-based methods. This slow convergence can lead to longer training times and make the algorithms less suitable for real-time applications.

  • High computational cost: Meta-heuristic algorithms are computationally expensive, requiring a large number of iterations and evaluations of the objective function. This can be a significant disadvantage in large-scale neural network training, where the cost of each iteration can be prohibitive.

YUKI-ANN

The code presented here implements YUKI-algorithm training of a feedforward neural network with multiple hidden layers.

It first loads the model parameters and the corresponding data from text files. Then, it creates a feedforward neural network with multiple hidden layers and a configurable number of neurons per layer.
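
A plausible opening for this part is sketched below; the file names, the column-wise sample layout, and the layer sizes are assumptions, not taken from the download:

```matlab
% Load inputs and targets from text files (assumed names and layout;
% the toolbox expects one sample per column).
Model_Data       = load('Model_Data.txt');
Model_Parameters = load('Model_Parameters.txt');

% Feedforward network with multiple hidden layers and a variable
% number of neurons per layer.
hiddenSizes = [20 15 10];              % e.g. three hidden layers
net = feedforwardnet(hiddenSizes);
net = configure(net, Model_Data, Model_Parameters);
```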

Next, the network is trained in two parts using two different methods. In the first part, the network is trained with the backpropagation algorithm (the default option of the train function) using gradient descent. The input and target data are Model_Data and Model_Parameters, respectively.
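
In code, this first part reduces to a single call to train. One caveat: feedforwardnet actually defaults to the Levenberg-Marquardt algorithm ('trainlm'), so plain gradient descent has to be requested explicitly. Continuing the sketch above:

```matlab
% Gradient-based baseline via backpropagation.
net.trainFcn = 'traingd';              % plain gradient descent
[netBP, trRecord] = train(net, Model_Data, Model_Parameters);

Y_bp = netBP(Model_Data);              % baseline predictions
fprintf('backprop MSE: %.4g\n', perform(netBP, Model_Parameters, Y_bp));
```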

In the second part, a modified version of the particle swarm optimization algorithm, called YUKI, is employed to optimize the weights of the neural network. A random initial population of solutions is generated, and a local search area is defined around the current best solution. Then, for each solution, a new position is generated by adding a weighted combination of two vectors to its current position: the first vector points toward the solution's own historical best position, while the second points toward the current best solution's position. The weights of these two vectors are set by a constant EXP, which controls the exploration-exploitation trade-off.
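
The position update described above can be sketched as follows. Only EXP comes from the description; the variable names (pop, pBest, gBest), the radius r, and the placement of the random factors are assumptions:

```matlab
% One YUKI-style position update (schematic, self-contained toy sizes).
nW   = 50;                         % search dimension (number of weights)
nPop = 30;  EXP = 0.7;  r = 0.5;   % population size, trade-off, radius
gBest = zeros(1, nW);              % current best position (placeholder)
pop   = gBest + (2*rand(nPop, nW) - 1) * r;  % random population in the
pBest = pop;                                 % local search area

for i = 1:nPop
    toPbest = pBest(i,:) - pop(i,:);  % toward this solution's own best
    toGbest = gBest      - pop(i,:);  % toward the current best solution
    pop(i,:) = pop(i,:) + EXP .* rand(1, nW) .* toGbest ...
                        + (1 - EXP) .* rand(1, nW) .* toPbest;
end
```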

After generating new solutions, each candidate's weight vector is loaded into the neural network and the corresponding error is calculated. The best position and error of each solution, as well as the global best position and error, are then updated.
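
Evaluating the new solutions and refreshing the bests could then look like this, continuing the sketch above (the placeholder fitness stands in for the network MSE defined earlier):

```matlab
fitness  = @(wb) sum(wb.^2);       % placeholder for the network MSE
bestErr  = inf(nPop, 1);           % best error seen by each solution
gBestErr = inf;                    % best error seen overall

for i = 1:nPop
    err = fitness(pop(i,:));       % load weights, forward pass, error
    if err < bestErr(i)            % refresh this solution's own best
        bestErr(i) = err;  pBest(i,:) = pop(i,:);
    end
    if err < gBestErr              % refresh the global best
        gBestErr = err;  gBest = pop(i,:);
    end
end
```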

The algorithm continues until either the maximum number of iterations is reached or the error is below a predefined tolerance level. Finally, the trained neural network is used to make predictions on a new set of input data, and the training results are plotted, including the mean squared error and the regression results.
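
The outer loop with both stopping criteria, followed by prediction and plotting, might then be structured as below (schematic skeleton; maxIter, tol, and New_Data are assumed names, and the inner steps are the two sketches above):

```matlab
maxIter = 500;  tol = 1e-6;
errHist = nan(maxIter, 1);
for it = 1:maxIter
    % ... one YUKI iteration: position update + evaluation (see above) ...
    errHist(it) = gBestErr;            % track convergence
    if gBestErr < tol, break, end      % stop early when tolerance is met
end

netBest = setwb(net, gBest(:));        % install the best weight vector
Y_new   = netBest(New_Data);           % predictions on a new input set
figure, semilogy(errHist(1:it)), xlabel('iteration'), ylabel('best MSE')
figure, plotregression(Model_Parameters, netBest(Model_Data))
```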


📑 Cite as

Deep Neural Network and YUKI Algorithm for Inner Damage Characterization Based on Elastic Boundary Displacement. Lecture Notes in Civil Engineering. 2023. https://doi.org/10.1007/978-3-031-24041-6_18


Damage assessment in laminated composite plates using Modal Strain Energy and YUKI-ANN algorithm. Composite Structures. 2023. https://doi.org/10.1016/j.compstruct.2022.116272



This post is licensed under CC BY 4.0 by the author.
