
Building Neural Networks With TensorFlow.NET

Jul 11, 2022 · 9 min read
by Robert Krzaczyński, reviewed by Jonathan Allen

Building neural networks is mainly associated with languages and environments such as Python, R, or MATLAB. In the last few years, however, new possibilities have appeared, among them the .NET ecosystem. Of course, you can implement neural networks from scratch in any language, but that is beyond the scope of this article; what I have in mind are libraries that provide these capabilities out of the box. In this article, I will focus on one of them: TensorFlow.NET.
Let me first explain the general concept of neural networks. We will centre on the feedforward neural network (FNN), one of the basic types of neural networks. This type of network is also called the multi-layer perceptron (MLP).
The objective of a feedforward neural network is to approximate some function f*. Neural networks use classifiers, which are algorithms that map input data to a specific category. For instance, a classifier y = f*(x) maps an input x to a category y. An MLP defines the mapping y = f(x; α) and learns the parameter values α, exactly those that provide the best approximation of the function.
Note the graphic below. It shows a multi-layer perceptron, which consists of one input layer, at least one hidden layer, and an output layer.
This is called a feedforward network because information always flows in one direction; it never goes backwards. It is worth mentioning that if a neural network contains two or more hidden layers, we call it a deep neural network (DNN).

Artificial neural networks are a fundamental part of deep learning. They are mathematical models of biological neural networks based on the concept of artificial neurons. In an artificial neural network, an artificial neuron is treated as a computational unit that, using a specific activation function, calculates an output value from the sum of its weighted inputs. Compared to other machine learning algorithms, neural networks are scalable and versatile, which makes them well suited to huge and complex machine learning tasks. Because of these features and advantages, the range of neural network applications is extensive.
We can apply them wherever qualitative rather than quantitative answers are required. Quantitative data is any information that can be counted or measured and given a numerical value, whereas qualitative answers are descriptive in nature, expressed in language rather than numbers. This is why neural networks appear in so many application areas.
For a better understanding, consider the following real-life example. A company operating in 20 countries across the Central and Eastern European market distributes components for hydraulic and temperature control systems. The problem lies in determining the amount of stock to keep available at each distribution point. For several years, the administrative staff collected detailed reports on demand and supply for each type of product. This data shows that sometimes too much is stored and sometimes there are shortages in supply. The company decided to use an artificial neural network to solve this problem.
To create the inputs of the neural network, reports from five years of the stores' operations were used. This resulted in 206 cases prepared by office workers, of which 60% were used for the training set, 20% for validation, and the remaining 20% for testing. Each case contained features such as the weather conditions in a particular region or the location of the store, i.e., whether it is downtown or in the suburbs.
The results were not convincing enough to use the resulting algorithm for stock management (an effectiveness of around 65%), but they were three times better, and closer to the truth, than the “classical methods” based on basic calculations. It can be concluded that the author will make another attempt to create a system that takes more factors into account in order to obtain a more reliable solution.
Artificial neural networks are used in various branches of science, economy, and industry. Since more and more entrepreneurs and specialists are noticing their effectiveness, the number of technologies that allow for their creation and implementation in commercial systems is also growing. There are many libraries implementing the techniques I mentioned before. One of them is TensorFlow.NET, which I will introduce in a moment.
TensorFlow.NET is a library that provides a .NET Standard binding for TensorFlow. It allows .NET developers to design, train, and deploy machine learning algorithms, including neural networks. TensorFlow.NET also lets us leverage various machine learning models and access the programming resources offered by TensorFlow.
TensorFlow is an open-source framework for numerical computing developed by Google scientists and engineers. It is composed of a set of tools for designing, training, and fine-tuning neural networks. TensorFlow's flexible architecture makes it possible to run calculations on one or more processors (CPUs) or graphics cards (GPUs) on a personal computer or server without rewriting code.
Keras is another open-source library for creating neural networks. It uses TensorFlow or Theano as a backend where the operations are performed; Keras aims to simplify the use of these frameworks, executing the algorithms and returning the results to us. We will also use Keras in the example below.
In this example, we will create an FNN and use some terms related to neural networks, such as layers, the loss function, etc. I recommend checking out this article written by Matthew Stewart to understand all these terms.
First, you need to create a console application project and download the necessary libraries from NuGet Packages.
At this point, you can start implementing the model. The first step is to create a class corresponding to your neural network. It should include fields for the model and for the training and test sets. The Model class comes from TensorFlow.Keras.Engine, and NDArray is part of NumSharp, the .NET counterpart of the NumPy library known from the Python world.
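A minimal skeleton of such a class might look like the sketch below. It assumes the TensorFlow.NET and TensorFlow.Keras NuGet packages are installed (plus a native runtime package such as SciSharp.TensorFlow.Redist); the class and field names are only illustrative, and in more recent TensorFlow.NET releases NDArray lives in Tensorflow.NumPy rather than NumSharp.

```csharp
using NumSharp;                    // NDArray (moved to Tensorflow.NumPy in newer releases)
using Tensorflow.Keras.Engine;     // Model
using static Tensorflow.KerasApi;  // the keras entry point

public class Fnn
{
    Model model;                   // the Keras model we will build and train

    // training and test sets
    NDArray x_train, y_train;
    NDArray x_test, y_test;
}
```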
The second step is to generate the training and test sets. For this purpose, we will use the MNIST dataset from Keras. MNIST is a massive database of handwritten digits that is used to train various image processing algorithms; it is loaded directly through the Keras library. The training images measure 28×28 pixels and there are 60,000 of them. We need to reshape each of them into a single row of 784 pixels (28×28) and scale the values from the 0-255 range to the 0-1 range, because we need to normalize the inputs for our neural network. The test images are treated the same way, except there are 10,000 of them.
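A data-preparation method along these lines could look like the following sketch, based on the standard TensorFlow.NET Keras example; the method name is illustrative and exact signatures may vary between versions.

```csharp
public void PrepareData()
{
    // load the MNIST dataset bundled with Keras
    var ((xTrain, yTrain), (xTest, yTest)) = keras.datasets.mnist.load_data();

    // flatten every 28x28 image into a single row of 784 pixels
    // and scale the pixel values from 0-255 down to 0-1
    x_train = xTrain.reshape((60000, 784)) / 255f;
    x_test = xTest.reshape((10000, 784)) / 255f;
    y_train = yTrain;
    y_test = yTest;
}
```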
Now we can focus on the code responsible for building the model and configuring the options of the neural network. This is where we define the layers and their activation functions, the optimizer, the loss function, and the metrics; the general structure of the network is easy to read from this code. The implementation of our neural network is shown below.
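The sketch below uses the Keras functional API exposed by TensorFlow.Keras and follows the layer sizes described in the next paragraph; treat it as illustrative rather than canonical, since activation and optimizer signatures differ slightly between TensorFlow.NET versions.

```csharp
public void BuildModel()
{
    var layers = keras.layers;

    // input layer: one flattened 28x28 image, i.e. 784 values
    var inputs = keras.Input(shape: 784);

    // dense layer with 64 units and ReLU activation
    var x = layers.Dense(64, activation: keras.activations.Relu).Apply(inputs);

    // output layer with 10 units, one per digit class
    var outputs = layers.Dense(10).Apply(x);

    model = keras.Model(inputs, outputs, name: "fnn");
    model.summary();

    // Adam optimizer, cross-entropy loss, and accuracy as the training metric
    model.compile(optimizer: keras.optimizers.Adam(),
        loss: keras.losses.SparseCategoricalCrossentropy(from_logits: true),
        metrics: new[] { "accuracy" });
}
```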
In this example, the input shape is set to 784 (because each image is a single row of 784 pixels), the first dense layer has an output dimensionality of 64, and the output layer has 10 units (you can use a different number of layers and units here; choose them by trial and error). The activation function is ReLU, and the Adam algorithm, which was designed specifically for training deep neural networks, is used as the optimizer. Additionally, accuracy will be our metric for checking the quality of learning. This is a good moment to explain what the loss function and accuracy mean.
The loss function of a neural network measures the difference between the expected result and the result produced by the machine learning model. From the loss function we derive the gradients that are used to update the weights; the average of all the losses constitutes the cost.
Accuracy is the number of correctly predicted instances divided by the total number of instances tested; it tells us how many instances have been classified correctly, and the higher it is, the better. In our case, the accuracy is over 90%. In general, the results look very good, but our analysis is not exhaustive and would require more detailed study. Having completed the previous steps, you can now move on to training and testing the model:
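A possible sketch of this step, again with an illustrative method name, uses the model's fit and evaluate methods; the batch size and epoch count follow the values discussed below.

```csharp
public void TrainAndTest()
{
    // train on the 60,000 training images
    model.fit(x_train, y_train, batch_size: 8, epochs: 2);

    // report loss and accuracy on the 10,000 test images
    model.evaluate(x_test, y_test);
}
```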
Set the batch size, which is the size of the subset of the training sample processed at once, to 8, for example, and the number of epochs to 2. These values are also chosen experimentally. Finally, create an instance of the Fnn class and execute the code, as sketched below.
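Tying everything together, the console application's entry point could be as simple as the following; the method names correspond to the hypothetical sketches above.

```csharp
class Program
{
    static void Main(string[] args)
    {
        var fnn = new Fnn();   // the neural network class defined earlier
        fnn.PrepareData();     // load and normalize MNIST
        fnn.BuildModel();      // define layers, optimizer, loss, and metric
        fnn.TrainAndTest();    // fit on the training set, evaluate on the test set
    }
}
```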
Once the application has started, the training phase should begin. In the console, you should see something similar to this:

After a while, depending on how large your dataset is, it should proceed to the testing phase:

As you can see, the loss and accuracy are reported for each iteration. The values obtained indicate that the neural network we created in this example works very well.
In this article, I wanted to focus on showing how a neural network can be designed. Of course, to use a neural-network-based algorithm in TensorFlow.NET, you do not need to know the theory behind it; nonetheless, I believe that familiarity with the basics allows for a better understanding of the problems and the results obtained. Until a few years ago, machine learning was associated only with programming languages such as Python or R. Thanks to libraries like the one discussed here, C# is also starting to play a significant role, and I hope it will continue in this direction.
