<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.hshl.de/wiki/index.php?action=history&amp;feed=atom&amp;title=Zwicky-box_analysis</id>
	<title>Zwicky-box analysis - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.hshl.de/wiki/index.php?action=history&amp;feed=atom&amp;title=Zwicky-box_analysis"/>
	<link rel="alternate" type="text/html" href="https://wiki.hshl.de/wiki/index.php?title=Zwicky-box_analysis&amp;action=history"/>
	<updated>2026-04-14T23:43:24Z</updated>
	<subtitle>Revision history for this page on HSHL Mechatronik</subtitle>
	<generator>MediaWiki 1.43.0</generator>
	<entry>
		<id>https://wiki.hshl.de/wiki/index.php?title=Zwicky-box_analysis&amp;diff=148018&amp;oldid=prev</id>
		<title>Ajay.paul@stud.hshl.de: /* Zwicky-Box Analysis for Models */</title>
		<link rel="alternate" type="text/html" href="https://wiki.hshl.de/wiki/index.php?title=Zwicky-box_analysis&amp;diff=148018&amp;oldid=prev"/>
		<updated>2026-04-14T13:49:32Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Zwicky-Box Analysis for Models&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 14 April 2026, 13:49&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l3&quot;&gt;Line 3:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 3:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;We perform a Zwicky-box analysis of four CNN and Transformer models (ResNet-50, MobileNetV2, EfficientNet-B0, ViT-Base) across six image tasks. We examine what each model offers (parameters, FLOPs, input size, data requirements, speed, memory, hardware target, explainability, noise robustness, and test accuracy).  &lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;We perform a Zwicky-box analysis of four CNN and Transformer models (ResNet-50, MobileNetV2, EfficientNet-B0, ViT-Base) across six image tasks. We examine what each model offers (parameters, FLOPs, input size, data requirements, speed, memory, hardware target, explainability, noise robustness, and test accuracy).  &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;We &lt;/del&gt;define key parameters (such as model size, compute cost, speed, memory footprint, data requirements, transfer learning, explainability, noise handling, and segmentation output) with discrete levels. We map each model onto these levels and evaluate it on tasks such as image restoration, image enhancement, denoising, supervised segmentation, standard segmentation, and classification.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Here we &lt;/ins&gt;define key parameters (such as model size, compute cost, speed, memory footprint, data requirements, transfer learning, explainability, noise handling, and segmentation output) with discrete levels. We map each model onto these levels and evaluate it on tasks such as image restoration, image enhancement, denoising, supervised segmentation, standard segmentation, and classification.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;What we &lt;/del&gt;find&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;: &lt;/del&gt;CNN models (ResNet and EfficientNet) deliver accurate predictions and dense pixel outputs, but at a higher computational cost. MobileNetV2 is very compact (well suited to edge devices) but less accurate. ViT-Base is very large and data-hungry; it performs well on classification only when trained on a huge dataset.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;We &lt;/ins&gt;find &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;that &lt;/ins&gt;CNN models (ResNet and EfficientNet) deliver accurate predictions and dense pixel outputs, but at a higher computational cost. MobileNetV2 is very compact (well suited to edge devices) but less accurate. ViT-Base is very large and data-hungry; it performs well on classification only when trained on a huge dataset.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;We show the &lt;/del&gt;morphological table, explain why each model is chosen for each task, and provide a comparison table and a metrics chart.  &lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Below we present the &lt;/ins&gt;morphological table, explain why each model is chosen for each task, and provide a comparison table and a metrics chart.  &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;What we suggest: &lt;/del&gt;For pixel-level tasks (restoration, enhancement, or segmentation), CNNs (ResNet or EfficientNet) are the best choice. For mobile or real-time deployment, use MobileNetV2 or a small EfficientNet. For classification alone with abundant training data, use ViT or a larger EfficientNet. The MATLAB Deep Learning Toolbox provides pretrained models ready to use, such as &amp;lt;tt&amp;gt;resnet50&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;mobilenetv2&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;efficientnetb0&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;visionTransformer&amp;lt;/tt&amp;gt;.&amp;lt;ref&amp;gt;MathWorks. &#039;&#039;Pretrained Deep Neural Networks&#039;&#039;. Available at: https://www.mathworks.com/help/deeplearning/ug/pretrained-convolutional-neural-networks.html&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;MathWorks. &#039;&#039;visionTransformer - Pretrained vision transformer (ViT) neural network&#039;&#039;. Available at: https://www.mathworks.com/help/vision/ref/visiontransformer.html&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;For pixel-level tasks (restoration, enhancement, or segmentation), CNNs (ResNet or EfficientNet) are the best choice. For mobile or real-time deployment, use MobileNetV2 or a small EfficientNet. For classification alone with abundant training data, use ViT or a larger EfficientNet. The MATLAB Deep Learning Toolbox provides pretrained models ready to use, such as &amp;lt;tt&amp;gt;resnet50&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;mobilenetv2&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;efficientnetb0&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;visionTransformer&amp;lt;/tt&amp;gt;.&amp;lt;ref&amp;gt;MathWorks. &#039;&#039;Pretrained Deep Neural Networks&#039;&#039;. Available at: https://www.mathworks.com/help/deeplearning/ug/pretrained-convolutional-neural-networks.html&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;MathWorks. &#039;&#039;visionTransformer - Pretrained vision transformer (ViT) neural network&#039;&#039;. Available at: https://www.mathworks.com/help/vision/ref/visiontransformer.html&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== References ==&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== References ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;references /&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;

&lt;/table&gt;</summary>
		<author><name>Ajay.paul@stud.hshl.de</name></author>
	</entry>
	<entry>
		<id>https://wiki.hshl.de/wiki/index.php?title=Zwicky-box_analysis&amp;diff=148017&amp;oldid=prev</id>
		<title>Ajay.paul@stud.hshl.de: Created page with „== Zwicky-Box Analysis for Models ==  We perform a Zwicky-box analysis of four CNN and Transformer models (ResNet-50, MobileNetV2, EfficientNet-B0, ViT-Base) across six image tasks. We examine what each model offers (parameters, FLOPs, input size, data requirements, speed, memory, hardware target, explainability, noise robustness, and test accuracy).   We define key parameters (such as model size, compute cost, speed, memory footprint, data requirements, transfer learning, explainability, noise handl…“</title>
		<link rel="alternate" type="text/html" href="https://wiki.hshl.de/wiki/index.php?title=Zwicky-box_analysis&amp;diff=148017&amp;oldid=prev"/>
		<updated>2026-04-14T13:35:24Z</updated>

		<summary type="html">&lt;p&gt;Die Seite wurde neu angelegt: „== Zwicky-Box Analysis for Models ==  We do a Zwicky-box analysis of four CNN and Transformer models (ResNet-50, MobileNetV2, EfficientNet-B0, ViT-Base) across six image task. We look at what each model have (parameters, FLOPs, input size, how much data it need, speed, memory, hardware target, explainability, noise strongness, and test accuracy).   We make key parts (like model size, math cost, speed, memory size, data need, transfer, explain, noise handl…“&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;== Zwicky-Box Analysis for Models ==&lt;br /&gt;
&lt;br /&gt;
We perform a Zwicky-box analysis of four CNN and Transformer models (ResNet-50, MobileNetV2, EfficientNet-B0, ViT-Base) across six image tasks. We examine what each model offers (parameters, FLOPs, input size, data requirements, speed, memory, hardware target, explainability, noise robustness, and test accuracy). &lt;br /&gt;
&lt;br /&gt;
We define key parameters (such as model size, compute cost, speed, memory footprint, data requirements, transfer learning, explainability, noise handling, and segmentation output) with discrete levels. We map each model onto these levels and evaluate it on tasks such as image restoration, image enhancement, denoising, supervised segmentation, standard segmentation, and classification.&lt;br /&gt;
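&lt;br /&gt;
To make the mapping concrete, here is a minimal MATLAB sketch of how such a parameter-level grid can be enumerated; the parameter names and levels below are simplified placeholders, not the full table from this analysis:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% Illustrative Zwicky box: three parameters, each with discrete levels.&lt;br /&gt;
levels = { ["Small" "Medium" "Large"], ...   % model size&lt;br /&gt;
           ["Low" "High"], ...               % compute cost&lt;br /&gt;
           ["Slow" "Fast"] };                % inference speed&lt;br /&gt;
% Enumerate every combination of levels (the morphological field).&lt;br /&gt;
[i, j, k] = ndgrid(1:numel(levels{1}), 1:numel(levels{2}), 1:numel(levels{3}));&lt;br /&gt;
combos = [levels{1}(i(:)), levels{2}(j(:)), levels{3}(k(:))];   % 12 rows, one per configuration&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;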
&lt;br /&gt;
What we find: CNN models (ResNet and EfficientNet) deliver accurate predictions and dense pixel outputs, but at a higher computational cost. MobileNetV2 is very compact (well suited to edge devices) but less accurate. ViT-Base is very large and data-hungry; it performs well on classification only when trained on a huge dataset.&lt;br /&gt;
&lt;br /&gt;
We show the morphological table, explain why each model is chosen for each task, and provide a comparison table and a metrics chart. &lt;br /&gt;
&lt;br /&gt;
What we suggest: For pixel-level tasks (restoration, enhancement, or segmentation), CNNs (ResNet or EfficientNet) are the best choice. For mobile or real-time deployment, use MobileNetV2 or a small EfficientNet. For classification alone with abundant training data, use ViT or a larger EfficientNet. The MATLAB Deep Learning Toolbox provides pretrained models ready to use, such as &amp;lt;tt&amp;gt;resnet50&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;mobilenetv2&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;efficientnetb0&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;visionTransformer&amp;lt;/tt&amp;gt;.&amp;lt;ref&amp;gt;MathWorks. &amp;#039;&amp;#039;Pretrained Deep Neural Networks&amp;#039;&amp;#039;. Available at: https://www.mathworks.com/help/deeplearning/ug/pretrained-convolutional-neural-networks.html&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;MathWorks. &amp;#039;&amp;#039;visionTransformer - Pretrained vision transformer (ViT) neural network&amp;#039;&amp;#039;. Available at: https://www.mathworks.com/help/vision/ref/visiontransformer.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
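&lt;br /&gt;
As a minimal usage sketch (it assumes the corresponding pretrained-model support package is installed and uses a sample image that ships with MATLAB), one of these networks can be loaded and applied like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% Load the pretrained ResNet-50 from the Deep Learning Toolbox.&lt;br /&gt;
net = resnet50;&lt;br /&gt;
% Resize a test image to the network input size (224 x 224 x 3).&lt;br /&gt;
img = imresize(imread("peppers.png"), net.Layers(1).InputSize(1:2));&lt;br /&gt;
% Predict the ImageNet class label.&lt;br /&gt;
label = classify(net, img);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;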
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ajay.paul@stud.hshl.de</name></author>
	</entry>
</feed>