<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="de">
	<id>https://wiki.hshl.de/wiki/index.php?action=history&amp;feed=atom&amp;title=PageName</id>
	<title>PageName - revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.hshl.de/wiki/index.php?action=history&amp;feed=atom&amp;title=PageName"/>
	<link rel="alternate" type="text/html" href="https://wiki.hshl.de/wiki/index.php?title=PageName&amp;action=history"/>
	<updated>2026-04-15T04:59:46Z</updated>
	<subtitle>Revision history of this page on HSHL Mechatronik</subtitle>
	<generator>MediaWiki 1.43.0</generator>
	<entry>
		<id>https://wiki.hshl.de/wiki/index.php?title=PageName&amp;diff=146901&amp;oldid=prev</id>
		<title>Ajay.paul@stud.hshl.de: Created page with „= Convolutional Neural Network for Image Classification =  A &#039;&#039;&#039;Convolutional Neural Network&#039;&#039;&#039; (CNN) is a class of deep neural networks, most commonly applied to analyzing visual imagery. This article describes a specific implementation of a CNN using the TensorFlow and Keras libraries to classify images from the &#039;&#039;&#039;CIFAR-10&#039;&#039;&#039; dataset.  == Overview == The model described herein is designed to classify low-resolution color images ($32 \times 32$…“</title>
		<link rel="alternate" type="text/html" href="https://wiki.hshl.de/wiki/index.php?title=PageName&amp;diff=146901&amp;oldid=prev"/>
		<updated>2026-02-06T07:29:51Z</updated>

		<summary type="html">&lt;p&gt;Created page with „= Convolutional Neural Network for Image Classification =  A &amp;#039;&amp;#039;&amp;#039;Convolutional Neural Network&amp;#039;&amp;#039;&amp;#039; (CNN) is a class of deep neural networks, most commonly applied to analyzing visual imagery. This article describes a specific implementation of a CNN using the &lt;a href=&quot;/wiki/index.php?title=TensorFlow&amp;amp;action=edit&amp;amp;redlink=1&quot; class=&quot;new&quot; title=&quot;TensorFlow (page does not exist)&quot;&gt;TensorFlow&lt;/a&gt; and &lt;a href=&quot;/wiki/index.php?title=Keras&amp;amp;action=edit&amp;amp;redlink=1&quot; class=&quot;new&quot; title=&quot;Keras (page does not exist)&quot;&gt;Keras&lt;/a&gt; libraries to classify images from the &amp;#039;&amp;#039;&amp;#039;CIFAR-10&amp;#039;&amp;#039;&amp;#039; dataset.  == Overview == The model described herein is designed to classify low-resolution color images ($32 \times 32$…“&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;= Convolutional Neural Network for Image Classification =&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;Convolutional Neural Network&amp;#039;&amp;#039;&amp;#039; (CNN) is a class of deep neural networks, most commonly applied to analyzing visual imagery. This article describes a specific implementation of a CNN using the [[TensorFlow]] and [[Keras]] libraries to classify images from the &amp;#039;&amp;#039;&amp;#039;CIFAR-10&amp;#039;&amp;#039;&amp;#039; dataset.&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
The model described herein is designed to classify low-resolution color images ($32 \times 32$ pixels) into one of ten distinct classes (e.g., airplanes, automobiles, birds, cats). The implementation utilizes a sequential architecture consisting of a convolutional base for feature extraction followed by a dense network for classification.&lt;br /&gt;
&lt;br /&gt;
== Dataset ==&lt;br /&gt;
The system is trained on the &amp;#039;&amp;#039;&amp;#039;CIFAR-10&amp;#039;&amp;#039;&amp;#039; dataset, which consists of 60,000 $32 \times 32$ color images in 10 classes, with 6,000 images per class.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Training set:&amp;#039;&amp;#039;&amp;#039; 50,000 images&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Test set:&amp;#039;&amp;#039;&amp;#039; 10,000 images&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Preprocessing:&amp;#039;&amp;#039;&amp;#039; Pixel values are normalized to the range [0, 1] by dividing by 255.0 to accelerate convergence during gradient descent.&lt;br /&gt;
&lt;br /&gt;
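The preprocessing step above can be sketched as follows. A random stand-in array replaces the real data so the snippet runs offline; in practice the images come from &lt;tt&gt;tf.keras.datasets.cifar10.load_data()&lt;/tt&gt;.

```python
import numpy as np

# Stand-in for a batch of CIFAR-10 images (uint8, values 0-255).
# Real data: (train_x, train_y), (test_x, test_y) = tf.keras.datasets.cifar10.load_data()
images = np.random.randint(0, 256, size=(4, 32, 32, 3), dtype=np.uint8)

# Normalize pixel values to [0, 1] by dividing by 255.0
normalized = images.astype('float32') / 255.0
```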
[[File:Dataset_Example.png|thumb|center|Dataset example used for CNN image classification]]&lt;br /&gt;
&lt;br /&gt;
== Network Architecture ==&lt;br /&gt;
The architecture follows a sequential pattern: &amp;lt;tt&amp;gt;Conv2D&amp;lt;/tt&amp;gt; $\rightarrow$ &amp;lt;tt&amp;gt;MaxPooling&amp;lt;/tt&amp;gt; $\rightarrow$ &amp;lt;tt&amp;gt;Dense&amp;lt;/tt&amp;gt;. The specific layer configuration and parameter counts are detailed below:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Model: &amp;quot;sequential&amp;quot;&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center; width:60%;&amp;quot;&lt;br /&gt;
! style=&amp;quot;text-align:left;&amp;quot; | Layer (type) !! Output Shape !! Param #&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | conv2d (Conv2D) || (None, 30, 30, 32) || 896&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | max_pooling2d (MaxPooling2D) || (None, 15, 15, 32) || 0&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | conv2d_1 (Conv2D) || (None, 13, 13, 64) || 18,496&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | max_pooling2d_1 (MaxPooling2D) || (None, 6, 6, 64) || 0&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | conv2d_2 (Conv2D) || (None, 4, 4, 64) || 36,928&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | flatten (Flatten) || (None, 1024) || 0&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | dense (Dense) || (None, 64) || 65,600&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;text-align:left;&amp;quot; | dense_1 (Dense) || (None, 10) || 650&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; background-color:#eaecf0; text-align:left;&amp;quot; colspan=&amp;quot;3&amp;quot; | Total params: 122,570 (478.79 KB)&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; background-color:#eaecf0; text-align:left;&amp;quot; colspan=&amp;quot;3&amp;quot; | Trainable params: 122,570 (478.79 KB)&lt;br /&gt;
|-&lt;br /&gt;
| style=&amp;quot;font-weight:bold; background-color:#eaecf0; text-align:left;&amp;quot; colspan=&amp;quot;3&amp;quot; | Non-trainable params: 0 (0.00 B)&lt;br /&gt;
|}&lt;br /&gt;
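As a sketch, the layer table above corresponds to the following Keras Sequential definition. The $3 \times 3$ kernels and ReLU activations are assumed from the standard TensorFlow CIFAR-10 tutorial; the resulting output shapes and parameter counts match the table.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Convolutional base for feature extraction, dense head for classification
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),                 # CIFAR-10 image size
    layers.Conv2D(32, (3, 3), activation='relu'),    # (30, 30, 32), 896 params
    layers.MaxPooling2D((2, 2)),                     # (15, 15, 32)
    layers.Conv2D(64, (3, 3), activation='relu'),    # (13, 13, 64), 18,496 params
    layers.MaxPooling2D((2, 2)),                     # (6, 6, 64)
    layers.Conv2D(64, (3, 3), activation='relu'),    # (4, 4, 64), 36,928 params
    layers.Flatten(),                                # (1024,)
    layers.Dense(64, activation='relu'),             # 65,600 params
    layers.Dense(10),                                # 10 class logits, 650 params
])
```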
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Training Configuration ===&lt;br /&gt;
The model is compiled with the following hyperparameters:&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Optimizer:&amp;#039;&amp;#039;&amp;#039; Adam (Adaptive Moment Estimation).&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Loss Function:&amp;#039;&amp;#039;&amp;#039; Sparse Categorical Crossentropy (&amp;lt;tt&amp;gt;from_logits=True&amp;lt;/tt&amp;gt;).&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Metrics:&amp;#039;&amp;#039;&amp;#039; Accuracy.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Epochs:&amp;#039;&amp;#039;&amp;#039; 10 (complete passes over the training set).&lt;br /&gt;
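A minimal sketch of this training configuration; the tiny placeholder model and random stand-in data only keep the snippet self-contained, and the epoch count is reduced from the 10 used in the article.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Placeholder model emitting raw logits (no softmax), hence from_logits=True below
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Flatten(),
    layers.Dense(10),
])

model.compile(optimizer='adam',  # Adam (Adaptive Moment Estimation)
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

# Random stand-in data; the article trains on CIFAR-10 with epochs=10
x = np.random.rand(8, 32, 32, 3).astype('float32')
y = np.random.randint(0, 10, size=(8,))
history = model.fit(x, y, epochs=2, verbose=0)
```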
&lt;br /&gt;
=== Performance ===&lt;br /&gt;
After training for 10 epochs, the model typically achieves:&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Training Accuracy:&amp;#039;&amp;#039;&amp;#039; Typically higher than the test accuracy; the exact value varies with weight initialization and data shuffling.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Test Accuracy:&amp;#039;&amp;#039;&amp;#039; Approximately 70–75%.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Overfitting:&amp;#039;&amp;#039;&amp;#039; A growing gap between training accuracy and validation accuracy indicates that the model is memorizing the training data. Techniques such as Dropout or data augmentation are recommended to mitigate this.&lt;br /&gt;
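The mitigation techniques mentioned above can be sketched as random augmentation layers at the input plus a Dropout layer before the classifier head. The specific layers and rates here are illustrative choices, not part of the original model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.RandomFlip('horizontal'),       # augmentation, active only during training
    layers.RandomRotation(0.1),            # small random rotations
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dropout(0.5),                   # zero out 50% of units during training
    layers.Dense(10),
])
```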
&lt;br /&gt;
=== Result ===&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;border:none;&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| [[File:Calssification.png|thumb]]&lt;br /&gt;
| [[File:Classification new data.png|thumb]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Inference on Custom Images ===&lt;br /&gt;
To test the model on external images, the input must be resized to the network&amp;#039;s expected input size ($32 \times 32$ pixels), normalized to [0, 1], and given a leading batch dimension before prediction.&lt;br /&gt;
&amp;lt;source lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
from tensorflow.keras.preprocessing import image&lt;br /&gt;
img = image.load_img(path, target_size=(32, 32))  # resize to the 32x32 input size&lt;br /&gt;
img_array = image.img_to_array(img) / 255.0       # normalize as during training&lt;br /&gt;
img_batch = np.expand_dims(img_array, axis=0)     # add batch dimension: (1, 32, 32, 3)&lt;br /&gt;
predictions = model.predict(img_batch)&lt;br /&gt;
&amp;lt;/source&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ajay.paul@stud.hshl.de</name></author>
	</entry>
</feed>