Publications (excluding degree theses)

Number of results: 536
Generated: Sun, 30 Jun 2024 16:57:47 +0200 in 0.1140 sec


Zhang, Chen;
Multispektrale und dreidimensionale Bildgebung im Nahbereich. - Ilmenau : Universitätsbibliothek, 2024. - 1 online resource (XIV, 212 pages)
Technische Universität Ilmenau, dissertation, 2024

This thesis aims to combine 3D imaging techniques with multispectral image acquisition. This technology fusion can be divided into two variants: 1. Multispectral 3D imaging: pixel-accurate capture of the 3D shape of a surface together with its spectrally resolved optical properties. 2. 3D reconstruction from multispectral 2D image data: recovery of the 3D surface shape from multispectral 2D image data. In the field of multispectral 3D imaging, two camera systems are presented. One system consists of two filter-wheel cameras and a digital pattern projector. It enables a high number of spectral channels and very precise 3D acquisition. Two industrial application examples illustrate the advantages of multispectral 3D imaging, in particular the more reliable analysis of multispectral information with the aid of 3D data. Furthermore, a real-time multi-camera system with a GOBO projector is presented. The GOBO projection enables precise real-time 3D acquisition and ensures a reliable registration of different 2D cameras. Using the developed calibration procedure, cameras with different imaging modalities are integrated, realizing multimodal 3D imaging. The potential of multimodal 3D imaging is demonstrated by an application example of vital-sign estimation. In the field of 3D reconstruction from multispectral 2D image data, results of investigations into the wavelength dependence of optical 3D measurement by sequential pattern projection are presented first. It is shown that the 3D measurement of opaque and diffuse surfaces is independent of the wavelength of the light. In contrast, a significant wavelength dependence is demonstrated for translucent and for concave glossy surfaces. Subsequently, an approach to snapshot 3D imaging of opaque and diffuse surfaces is presented. Different light patterns at different wavelengths are projected simultaneously with a multi-wavelength pattern projector and captured in a single 2D exposure by two snapshot multispectral cameras. The 3D reconstruction is performed by analyzing this spectral pattern sequence. Experimental evaluations provide a proof of concept for the proposed 3D approach.



https://doi.org/10.22032/dbt.60734
Bräuer-Burchardt, Christian; Munkelt, Christoph; Bleier, Michael; Baumann, Anja; Heinze, Matthias; Gebhart, Ingo; Kühmstedt, Peter; Notni, Gunther
Deepwater 3D measurements with a novel sensor system. - In: Applied Sciences, ISSN 2076-3417, Vol. 14 (2024), 2, 557, pp. 1-17

A novel 3D sensor system for underwater application is presented, primarily designed to carry out inspections of industrial facilities such as piping systems, offshore wind farm foundations, anchor chains, and other structures at depths of up to 1000 m. The 3D sensor system enables high-resolution 3D capture over a measuring volume of approximately 1 m³, as well as the simultaneous capture of color data using active stereo scanning with structured lighting, producing highly accurate and detailed 3D images for close-range inspection. Furthermore, the system uses visual inertial odometry to map the seafloor and create a rough 3D overall model of the environment via Simultaneous Localization and Mapping (SLAM). This also makes the system suitable for geological, biological, or archaeological applications in underwater areas. This article describes the overall system and data processing, as well as initial results regarding measurement accuracy and applicability from tests of the sensor system in a water basin and offshore with a Remotely Operated Vehicle (ROV) in the Baltic Sea.



https://doi.org/10.3390/app14020557
Ramm, Roland; de Dios Cruz, Pedro; Heist, Stefan; Kühmstedt, Peter; Notni, Gunther
Fusion of multimodal imaging and 3D digitization using photogrammetry. - In: Sensors, ISSN 1424-8220, Vol. 24 (2024), 7, 2290, pp. 1-20

Multimodal sensors capture and integrate diverse characteristics of a scene to maximize information gain. In optics, this may involve capturing intensity in specific spectra or polarization states to determine factors such as material properties or an individual’s health conditions. Combining multimodal camera data with shape data from 3D sensors is a challenging task. Multimodal cameras, e.g., hyperspectral cameras, or cameras outside the visible light spectrum, e.g., thermal cameras, fall far short of state-of-the-art photo cameras in resolution and image quality. In this article, a new method is demonstrated to superimpose multimodal image data onto a 3D model created by multi-view photogrammetry. While a high-resolution photo camera captures a set of images from varying view angles to reconstruct a detailed 3D model of the scene, low-resolution multimodal camera(s) simultaneously record the scene. All cameras are pre-calibrated and rigidly mounted on a rig, i.e., their imaging properties and relative positions are known. The method was realized in a laboratory setup consisting of a professional photo camera, a thermal camera, and a 12-channel multispectral camera. In our experiments, an accuracy better than one pixel was achieved for the data fusion using multimodal superimposition. Finally, application examples of multimodal 3D digitization are demonstrated, and further steps to system realization are discussed.
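The superimposition described above rests on the standard pinhole projection of calibrated cameras. The following is a minimal illustrative sketch (not code from the paper) of projecting a 3D point from a photogrammetric model into a rigidly mounted, pre-calibrated multimodal camera; all names and numeric values are hypothetical.

```python
import numpy as np

def project_point(X_world, K, R, t):
    """Project a 3D world point to pixel coordinates via x = K (R X + t)."""
    X_cam = R @ X_world + t   # world -> camera coordinates (rigid transform)
    x = K @ X_cam             # camera -> homogeneous pixel coordinates
    return x[:2] / x[2]       # perspective division

# Hypothetical intrinsics of a low-resolution thermal camera
K = np.array([[400.0,   0.0, 160.0],
              [  0.0, 400.0, 120.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                    # camera aligned with world axes
t = np.array([0.0, 0.0, 0.5])    # 0.5 m offset along the optical axis

uv = project_point(np.array([0.0, 0.0, 1.5]), K, R, t)
# A point on the optical axis projects to the principal point (160, 120).
```

With known relative poses between the photo camera and each multimodal camera, every 3D surface point can be looked up in the low-resolution multimodal images this way.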



https://doi.org/10.3390/s24072290
Wunsch, Lennard; Görner Tenorio, Christian; Anding, Katharina; Golomoz, Andrei; Notni, Gunther
Data fusion of RGB and depth data with image enhancement. - In: Journal of imaging, ISSN 2313-433X, Vol. 10 (2024), 3, 73, pp. 1-17

Since 3D sensors became popular, imaged depth data have become easier to obtain in the consumer sector. In applications such as defect localization on industrial objects or mass/volume estimation, precise depth data are important and therefore benefit from the use of multiple information sources. A combination of RGB images and depth images can not only improve our understanding of objects, enabling one to gain more information about them, but also enhance data quality. Combining different camera systems using data fusion can yield higher-quality data, since individual disadvantages can be compensated for. Data fusion itself consists of data preparation and data registration. One challenge in data fusion is the differing resolutions of the sensors; therefore, up- and downsampling algorithms are needed. This paper compares multiple up- and downsampling methods, such as different direct interpolation methods, joint bilateral upsampling (JBU), and Markov random fields (MRFs), in terms of their potential to create RGB-D images and improve the quality of depth information. In contrast to the literature, in which imaging systems are adjusted to acquire data of the same section simultaneously, the laboratory setup in this study was based on conveyor-based optical sorting processes, and therefore the data were acquired at different times and different spatial locations, making data assignment and data cropping necessary. To evaluate the results, root mean square error (RMSE), signal-to-noise ratio (SNR), correlation (CORR), universal quality index (UQI), and the contour offset were monitored. JBU outperformed the other upsampling methods, achieving a mean RMSE of 25.22, a mean SNR of 32.80, a mean CORR of 0.99, and a mean UQI of 0.97.
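The evaluation metrics named in the abstract can be sketched as follows. This is a hedged illustration using common textbook definitions of RMSE, SNR, and UQI on synthetic data; the exact conventions used in the paper may differ.

```python
import numpy as np

def rmse(ref, est):
    """Root mean square error between reference and estimate."""
    return np.sqrt(np.mean((ref - est) ** 2))

def snr_db(ref, est):
    """SNR in decibels: signal power over error power (one common convention)."""
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum((ref - est) ** 2))

def uqi(ref, est):
    """Universal Quality Index: combines correlation, luminance distortion,
    and contrast distortion; 1.0 means the images are identical."""
    mx, my = ref.mean(), est.mean()
    vx, vy = ref.var(), est.var()
    cov = np.mean((ref - mx) * (est - my))
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

rng = np.random.default_rng(0)
depth = rng.uniform(0.5, 2.0, size=(64, 64))            # synthetic depth map
upsampled = depth + rng.normal(0.0, 0.01, depth.shape)  # noisy "upsampled" estimate

print(rmse(depth, upsampled), snr_db(depth, upsampled), uqi(depth, upsampled))
```

A perfect reconstruction gives RMSE 0 and UQI 1; small additive noise drives the UQI only slightly below 1 while the RMSE tracks the noise level directly.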



https://doi.org/10.3390/jimaging10030073
Walther, Dominik; Junger, Christina; Schmidt, Leander; Schricker, Klaus; Notni, Gunther; Bergmann, Jean Pierre; Mäder, Patrick
Recurrent autoencoder for weld discontinuity prediction. - In: Journal of advanced joining processes, ISSN 2666-3309, Vol. 9 (2024), 100203, pp. 1-12

Laser beam butt welding is often the technique of choice for a wide range of industrial tasks. To achieve high-quality welds, manufacturers often rely on heavy and expensive clamping systems to limit sheet movement during the welding process, which can affect quality. Jigless welding offers a cost-effective and highly flexible alternative to common clamping systems. In laser butt welding, the process-induced joint gap has to be monitored in order to counteract it by means of active position control of the sheet metal. Various studies have shown that sheet metal displacement can be detected using inductive probes, allowing the prediction of weld quality by ML-based data analysis. The probes are, however, dependent on the sheet metal geometry and are limited in their applicability to complex geometric structures. Camera systems such as long-wave infrared (LWIR) cameras can instead be mounted directly behind the laser to overcome the geometry-dependent limitation of the jigless system. In this study, we propose a deep-learning approach that utilizes LWIR camera recordings to predict the remainder of the welding process, enabling early detection of weld interruptions. Our approach reaches 93.33% accuracy for time-wise prediction of the point of failure during the weld.



https://doi.org/10.1016/j.jajp.2024.100203
Schraml, Dominik; Notni, Gunther
Synthetic training data in AI-driven quality inspection: the significance of camera, lighting, and noise parameters. - In: Sensors, ISSN 1424-8220, Vol. 24 (2024), 2, 649, pp. 1-18

Industrial quality inspections, particularly those leveraging AI, require significant amounts of training data. In fields like injection molding, producing a multitude of defective parts for such data poses environmental and financial challenges. Synthetic training data emerge as a potential solution to address these concerns. Although the creation of realistic synthetic 2D images from 3D models of injection-molded parts involves numerous rendering parameters, the current literature on the generation and application of synthetic data in industrial quality inspection scarcely addresses the impact of these parameters on AI efficacy. In this study, we delve into some of these key parameters, such as camera position, lighting, and computational noise, to gauge their effect on AI performance. By utilizing Blender software, we procedurally introduced the “flash” defect on a 3D model sourced from a CAD file of an injection-molded part. Subsequently, with Blender’s Cycles rendering engine, we produced datasets for each parameter variation. These datasets were then used to train a pre-trained EfficientNet-V2 for the binary classification of the “flash” defect. Our results indicate that while noise is less critical, using a range of noise levels in training can benefit model adaptability and efficiency. Variability in camera positioning and lighting conditions was found to be more significant, enhancing model performance even when real-world conditions mirror the controlled synthetic environment. These findings suggest that incorporating diverse lighting and camera dynamics is beneficial for AI applications, regardless of the consistency in real-world operational settings.



https://doi.org/10.3390/s24020649
Linß, Gerhard; Linß, Elske; Notni, Gunther; Rosenberger, Maik; Greiner, Philipp; Illhardt, Sebastian; Kühn, Olaf; Hofmann, Dietrich; Höppner, Dominik; Szymkiewicz, Jennifer
Qualitätsmanagement-Grundlagen : Aufbau und Zertifizierung von Managementsystemen, Metrologie, Messtechnik
5th, completely revised edition. - München : Hanser, 2024. - 1 online resource (XV, 368 pages). - (Hanser eLibrary) ISBN 978-3-446-47695-0

A comprehensive, practice-oriented textbook and workbook. This textbook and workbook conveys the fundamentals of quality management (QM) and establishes connections to other fields of knowledge, in particular measurement technology and metrology. Chapters on standards for QM, quality control loops, the structure and design of integrated QM systems, process management, the state metrological infrastructure, and the introduction and certification of QM systems round out the presentation. - Takes the current ISO 9000 ff. family of standards into account - Guidance on building and maintaining QM documentation - Implementation-oriented and compact - Suitable for use both in study and in practice - Available for download: an extensive package of practical working aids



https://doi.org/10.3139/9783446476950
Wunsch, Lennard; Anding, Katharina; Polte, Galina; Liu, Kun; Notni, Gunther
Data augmentation for solving industrial recognition tasks with underrepresented defect classes. - In: Acta IMEKO, ISSN 2221-870X, Vol. 12 (2023), 4, pp. 1-5

This paper discusses neural network-based data augmentation to increase the performance of neural networks in the classification of datasets with underrepresented defect classes. The performance of deep neural networks suffers from an inhomogeneous class distribution in recognition tasks. In particular, applications of deep neural networks to quality assurance tasks in industrial production suffer from such unbalanced class distributions. In order to train deep learning networks, a large amount of data is needed to avoid overfitting and to give the network good generalisation ability; therefore, a large number of defect-class samples is needed. However, producing defect-class objects to obtain a training dataset can be costly. To reduce these costs, artificial intelligence in the form of Generative Adversarial Networks (GANs) can be used to generate images without producing real defect-class objects. This provides a cost-effective solution for any kind of underrepresented class; the focus of this work, however, is on defect classes. In this paper, a comparison of GAN-based data augmentation with classical data augmentation methods for simulating images of defect classes in an industrial context is presented. The results show the positive effect of both classical and GAN-based data augmentation. By applying both methods in parallel, the best results for defect-class recognition tasks on datasets with underrepresented classes can be achieved.
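The classical side of the comparison above can be sketched with simple geometric transforms that multiply the number of samples of a scarce defect class. This is purely illustrative; the paper's actual transform set (and the GAN side) is not reproduced here, and all data are synthetic.

```python
import numpy as np

def augment(image):
    """Return simple geometric variants of a single defect-class image."""
    variants = [image]
    variants.append(np.fliplr(image))        # horizontal mirror
    variants.append(np.flipud(image))        # vertical mirror
    for k in (1, 2, 3):
        variants.append(np.rot90(image, k))  # 90-degree rotations
    return variants

rng = np.random.default_rng(42)
defect_images = [rng.random((32, 32)) for _ in range(10)]  # underrepresented class

augmented = [v for img in defect_images for v in augment(img)]
# 10 originals become 60 training samples (6 variants each).
```

Such label-preserving transforms are cheap and deterministic, which is why they are a natural baseline against which GAN-generated samples are compared.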



https://doi.org/10.21014/actaimeko.v12i4.1320
Hake, Cornelius; Omlor, Markus; Breitbarth, Andreas; Notni, Gunther; Dilger, Klaus
Artificial intelligence methods for in-process high-speed image analysis in laser beam welding of hairpins. - In: NOLAMP - Nordic Laser Materials Processing Conference (19th NOLAMP 2023), 22-24 August 2023, Turku, Finland, (2023), 012007, pp. 1-13

In the production of modern electric drives for battery electric vehicles, hairpin technology is used to increase the copper fill factor in the stator of a permanently excited synchronous machine. A central process in the production of these stators is the contacting of the hairpin ends by means of laser beam welding. This welding process is characterized by geometric and process-related deviations from previous process steps, which influence the result of the welded joint, so an in-process monitoring method is desirable. As part of the process monitoring of welded joints, high-speed camera images are often used to detect weld spatter, which can be detected by a program based on a static algorithm. For this reason, a feasibility analysis is performed regarding the application of AI to spatter detection, in which the methods of semantic segmentation and single-image classification prove to be useful. In a preliminary experiment, three base networks for each of the two methods are evaluated with respect to the best training results. The single-image classification method is then extended by a subsequent static algorithm, so that a hybrid combination of AI and a static algorithm is investigated. The evaluation and final comparison of all methods is performed using data from a welding experiment. It turns out that the hybrid approach of single-image classification and a static algorithm has numerous advantages in the detection of spatter compared to semantic segmentation and the static algorithm alone.



https://doi.org/10.1088/1757-899X/1296/1/012007
Alić, Belmin; Zauber, Tim; Zhang, Chen; Liao, Wang; Wildenauer, Alina; Leosz, Noah; Eggert, Torsten; Dietz-Terjung, Sarah; Sutharsan, Sivagurunathan; Weinreich, Gerhard; Schöbel, Christoph; Notni, Gunther; Wiede, Christian; Seidl, Karsten
Contactless optical detection of nocturnal respiratory events. - In: Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP, (2023), pp. 336-344

Obstructive sleep apnea (OSA) is a common sleep-related breathing disorder characterized by the collapse of the upper airway and associated with various diseases. For clinical diagnosis, a patient’s sleep is recorded during the night via polysomnography (PSG) and evaluated the next day with regard to nocturnal respiratory events, the most prevalent of which are obstructive apneas and hypopneas. In this paper, we introduce a fully automatic contactless optical method for the detection of nocturnal respiratory events. The goal of this study is to demonstrate how nocturnal respiratory events, such as apneas and hypopneas, can be autonomously detected through the analysis of multi-spectral image data. This represents the first step towards a fully automatic and contactless diagnosis of OSA. We conducted a pilot patient study in a sleep laboratory and evaluated our results against PSG, the gold standard in sleep diagnostics. In a study sample of three patients, 24 hours of recorded video material, and 245 respiratory events, we achieved a classification accuracy of 82% with a random forest classifier.



https://doi.org/10.5220/0011694400003417