Today, as the post-COVID world enters the third decade of the 21st century, global development is more than ever bound to the ability to create new materials, and material science has become the key link between knowledge and progress. This applies to nearly every industry – from the tiniest electronic devices, through medical and military applications, to aviation and space exploration – the ability to develop, create and apply a specific material marks the border between business success and failure.
Material science traces back to the Bronze Age, but systematic research was first thoroughly described in the 16th century – for centuries, „De re metallica libri XII“ by Georgius Agricola was considered the bible of metallurgy. Until very recently, material science and metallurgy were based solely on empirical research. This changed with the ability to use numerical methods for theoretical modeling.
As information technology found its place in biology, astronomy, finance, medicine and business, material science was a natural next stop for the IT revolution.
Developing new materials with specific properties, such as high resistance or strength, is very costly, especially in the research and testing phase. The ability to predict a material's properties before it is even synthesized offers a huge technological leap: it allows optimizing the material and its production process, as well as planning recycling measures. Sounds like a science-fiction story, does it not? Well, not really…
The majority of today's materials do not occur naturally in their final form. Metals are refined and mixed with alloying elements to influence their mechanical or chemical characteristics.
Forecasting the mechanical properties of a new alloy is important for both scientists and engineers, as it saves time and money. The relation between the chemical composition and the mechanical properties of an alloy is so complex that common numerical methods may fall short of capturing it. That is why we have witnessed a steep increase in the use of neural networks.
Architecture of neural networks
Neural networks model the human brain, imitating the way problems are solved in our heads, which lets them handle certain tasks that the fastest digital computers struggle with. Our brains contain tens of billions of individual structural nodes (neurons), interconnected to form a complicated network, and learning is based on adjusting and optimizing the synaptic connections between them. An Artificial Neural Network (ANN) works similarly: each neuron is stimulated by the weighted sum of its incoming connections. A connection with a larger weight (higher importance) has more influence on the next layer of neurons. Each layer influences the next, passing the processed information deeper into the network until it reaches the final layer of output neurons, where the result is read.
Each neuron is activated by an activation function. Depending on the data being analyzed, there is a wide spectrum of functions to choose from; for example, the sigmoid function, very popular in the early days, allows fast propagation at the cost of some precision.
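The weighted-sum-and-activation idea can be sketched for a single neuron. This is a minimal illustration; the inputs, weights and bias below are made up:

```python
import math

def sigmoid(x):
    # Classic sigmoid activation: squashes any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    # A single artificial neuron: weighted sum of the incoming
    # connections plus a bias, passed through the activation function
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Example: three inputs with different weights (importance)
out = neuron_output([0.5, 0.2, 0.9], weights=[0.4, -0.6, 0.8], bias=0.1)
print(round(out, 4))  # → 0.7109
```

A full network stacks many such neurons into layers, each layer feeding its outputs to the next.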
To summarize, a well-configured and trained network can recognize shapes or group data in no time – a task extremely difficult for computers using classical algorithms.
How do neural networks work
Neural networks use distributed parallel processing of information, which means that recording, processing and transferring information is carried out by the whole network. Learning is a natural, integral function of every neural network, as information is stored in the strength of the synaptic connections between nodes. Connections leading to correct answers are strengthened, while those heading towards wrong answers are weakened. All of this happens iteratively, by presenting the data and analyzing the correct answers. Such data is called the training dataset. By analogy, a testing (validation) dataset is used to evaluate how well the network has been trained.
Network modelling and simulation consists of four main stages:
- data collection,
- initial data processing,
- learning process,
- predictive simulation with the trained and tested network (this is where the magic happens and we get the answers).
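These stages can be sketched end to end on synthetic data. This is a toy stand-in: a linear least-squares fit plays the role of the trained network, and every number below is invented:

```python
import random

# 1. Data collection: synthetic (copper %, strength) pairs as a
#    stand-in for real measurements (all values are invented)
random.seed(0)
raw = []
for _ in range(40):
    cu = random.uniform(0.0, 5.0)                  # % of copper in the alloy
    strength = 200 + 30 * cu + random.gauss(0, 5)  # "measured" strength, MPa
    raw.append((cu, strength))

# 2. Initial data processing: scale inputs into [0, 1]
max_cu = max(cu for cu, _ in raw)
data = [(cu / max_cu, s) for cu, s in raw]

# 3. Learning: fit a line by least squares (a stand-in for network training)
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
slope = sum((x - mx) * (y - my) for x, y in data) / sum((x - mx) ** 2 for x, _ in data)
intercept = my - slope * mx

# 4. Predictive simulation: estimate strength for a composition never measured
def predict_strength(cu_percent):
    return intercept + slope * (cu_percent / max_cu)
```

A real workflow would swap the line fit for a multi-layer network, but the shape of the pipeline stays the same.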
Optimal chemical composition of the alloy and its properties
Generally speaking, neural networks are excellent tools for analyzing data with mutual, unknown relations. For example, picture a neural network tasked with finding the relation between the amounts of alloying elements in aluminum and its properties. Each element melted into the alloy alters its characteristics, such as electrical or thermal conductivity, melting point or strength. Strength is a particularly important indicator, as it helps define other properties such as endurance, plasticity or abrasion resistance.
In the above example, the initial dataset must include the amount of each alloying element as a percentage, along with the empirically confirmed strength of the resulting alloy. These measured values are called the target. Next, the entire dataset is divided into training and testing datasets, usually in a 75/25 proportion, although this ratio depends on the amount of available data; a large dataset allows enlarging the testing set to evaluate the trained network more precisely.
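The split itself can look like this. The composition and strength values below are hypothetical, for illustration only:

```python
import random

# Hypothetical dataset: each row is (% Cu, % Mg, % Si, measured strength in MPa)
dataset = [
    (4.4, 1.5, 0.8, 310), (2.5, 0.3, 0.6, 250), (0.6, 1.0, 0.9, 230),
    (4.0, 0.7, 0.5, 300), (1.2, 2.6, 0.4, 270), (3.8, 1.3, 0.7, 295),
    (0.3, 0.1, 1.0, 180), (2.0, 1.8, 0.2, 265),
]

random.seed(42)
random.shuffle(dataset)           # avoid any ordering bias before splitting

split = int(len(dataset) * 0.75)  # the usual 75/25 proportion
train_set, test_set = dataset[:split], dataset[split:]
```

The network never sees `test_set` during training, so the test error is an honest estimate of how well it generalizes.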
Before the first iteration, the network sets its weights randomly, generating some result (far from the anticipated one, though). Then the network learns: it adjusts these weights using an error function that sums the error across the network. The simplest and most popular error function is the sum of squared differences (the method of least squares), which the network minimizes to adjust (strengthen or weaken) the weights. This process is called backpropagation (backprop) and is the essential element that makes training possible.
After that, the data is passed through the network again and again: the results are compared to the expected ones (hence "supervised learning"), the error is computed, and the weights are adjusted once more through backprop. This repeats in a loop, where one pass over the data is called an epoch. Learning ends when specific conditions are met: after a given number of epochs (iterations), when the error falls below a specified level, or when the error stops changing over a defined number of epochs.
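The loop described above (forward pass, error check, backprop update, stopping conditions) can be sketched for a single linear neuron. This is a toy with made-up data, not a full multi-layer network:

```python
# Toy targets: y = 2x; the network must discover w = 2, b = 0
data = [(0.0, 0.0), (0.5, 1.0), (1.0, 2.0)]
w, b, lr = 0.0, 0.0, 0.5            # random-ish start and learning rate

for epoch in range(1, 1001):        # stopping condition 1: max epochs
    grad_w = grad_b = error = 0.0
    for x, target in data:
        pred = w * x + b            # forward pass
        diff = pred - target
        error += diff ** 2          # summed squared error (least squares)
        grad_w += 2 * diff * x      # backprop: derivative of error w.r.t. w
        grad_b += 2 * diff          # backprop: derivative of error w.r.t. b
    w -= lr * grad_w / len(data)    # weight update (strengthen/weaken)
    b -= lr * grad_b / len(data)
    if error / len(data) < 1e-6:    # stopping condition 2: error threshold
        break
```

After a few dozen epochs the weights converge close to the true relation, and the loop stops on the error threshold well before the epoch limit.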
At this point, it is important to point out a problem with the training process: the error is minimized not over all possible data, but only over the training dataset, which can lead to overfitting – a situation where the network is unable to generalize well enough. The network learns the provided training set quite well, but has problems generalizing to other data. Providing large enough datasets should help avoid such issues.
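Overfitting typically shows up as a training error that keeps falling while the validation error starts to rise. A common countermeasure is early stopping, sketched here on hypothetical error curves:

```python
# Hypothetical per-epoch error curves: training error keeps falling,
# validation error bottoms out and then climbs - the overfitting signal
train_err = [1.00, 0.60, 0.35, 0.20, 0.12, 0.07, 0.04, 0.02]
val_err   = [1.05, 0.70, 0.45, 0.33, 0.30, 0.31, 0.35, 0.42]

# Early stopping: keep the weights from the epoch with the lowest
# validation error, before the network starts memorizing the training set
best_epoch = min(range(len(val_err)), key=val_err.__getitem__)
print(best_epoch)  # → 4 (0-indexed)
```

In practice, training frameworks monitor the validation error this way and restore the best weights automatically.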
As a result, a trained neural network can predict which elements, and in what amounts, to use to create an alloy with specifically desired properties – without the actual hard work, which is expensive, time-consuming and largely trial and error. All with data we already have.
Welding and neural networks
Another interesting application of neural networks is welding – more specifically, checking and approving (or rejecting) the quality of a weld. Data is collected by sensors in the welding head and passed to a properly trained network for evaluation. The model checks the quality by looking for potential internal defects and measuring the distance between welds, supervising the process at the same time. This article describes the process on an industrial scale.
The fact is that the concept of neural networks is not just a fancy, one-season trend. They are becoming more and more popular across a growing number of applications, and one can hope that in the foreseeable future they will be a standard rather than an experiment. One day, contrary to what Hollywood and sci-fi writers tell us, Artificial Intelligence will become an indispensable part of every branch of science and engineering.