Publications at the Faculty of Computer Science and Automation since 2015

Number of hits: 1956
Created: Wed, 17 Jul 2024 23:08:55 +0200 in 0.1026 sec


Rzanny, Michael Carsten; Wittich, Hans Christian; Mäder, Patrick; Deggelmann, Alice; Boho, David; Wäldchen, Jana
Image-based automated recognition of 31 Poaceae species: the most relevant perspectives. - In: Frontiers in plant science, ISSN 1664-462X, vol. 12 (2022), 804140, pp. 1-12

Poaceae represent one of the largest plant families in the world. Many species are of great economic importance as food and forage plants, while others represent important weeds in agriculture. Although a large number of studies currently address the question of how plants can best be recognized in images, there is a lack of studies evaluating specific approaches for uniform species groups that are considered difficult to identify because they lack obvious visual characteristics. Poaceae are an example of such a species group, especially when they are non-flowering. Here we present the results of an experiment to automatically identify Poaceae species based on images depicting six well-defined perspectives. One perspective shows the inflorescence, while the others show vegetative parts of the plant, such as the collar region with the ligule, the adaxial and abaxial sides of the leaf, and the culm nodes. For each species we collected 80 observations, each representing a series of six images taken with a smartphone camera. We extracted feature representations from the images using five different convolutional neural networks (CNNs) trained on objects from different domains and classified them using four state-of-the-art classification algorithms. We combined the perspectives via score-level fusion. To evaluate the potential of identifying non-flowering Poaceae, we separately compared perspective combinations with and without the inflorescence. We find that a fusion of all six perspectives, using the best combination of feature-extraction CNN and classifier, achieves an accuracy of 96.1%. Without the inflorescence, the overall accuracy is still as high as 90.3%. In all but one case, the perspective conveying the most information about the species (excluding the inflorescence) is the ligule in frontal view. Our results show that even species considered very difficult to identify can be recognized automatically with high accuracy as long as images depicting suitable perspectives are available. We suggest that our approach could be transferred to other difficult-to-distinguish species groups in order to identify the most relevant perspectives.
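To make the fusion step concrete, the following is a minimal, hypothetical Python sketch (not the authors' code): random vectors stand in for the CNN embeddings, logistic regression stands in for the four evaluated classifiers, and score-level fusion is implemented as an average of the per-perspective class-probability matrices.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_species, n_obs, n_perspectives, n_feats = 31, 80, 6, 128

# Synthetic stand-in data: one feature vector per (observation, perspective),
# mimicking CNN embeddings of inflorescence, ligule, leaf sides, and culm node.
y = np.repeat(np.arange(n_species), n_obs)
X = rng.normal(size=(n_species * n_obs, n_perspectives, n_feats))
X += y[:, None, None] * 0.05  # weak class-dependent signal so the demo learns

idx_train, idx_test = train_test_split(np.arange(len(y)), stratify=y, random_state=0)

# One classifier per perspective; score-level fusion = averaging the predicted
# class-probability matrices over all perspectives before taking the argmax.
fused = np.zeros((len(idx_test), n_species))
for p in range(n_perspectives):
    clf = LogisticRegression(max_iter=1000).fit(X[idx_train, p], y[idx_train])
    fused += clf.predict_proba(X[idx_test, p])
fused /= n_perspectives

accuracy = (fused.argmax(axis=1) == y[idx_test]).mean()
print(f"fused accuracy on synthetic data: {accuracy:.3f}")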



https://doi.org/10.3389/fpls.2021.804140

Eisenbach, Markus; Aganian, Dustin; Köhler, Mona; Stephan, Benedict; Schröter, Christof; Groß, Horst-Michael
Visual scene understanding for enabling situation-aware cobots. - Ilmenau : Universitätsbibliothek. - 1 online resource (2 pages). Publication created in the context of the event: IEEE International Conference on Automation Science and Engineering; 17 (Lyon, France): 2021.08.23-27, TuBT7 Special Session: Robotic Control and Robotization of Tasks within Industry 4.0

Although a high degree of automation is the objective of Industry 4.0, not every process can be fully automated, especially in versatile manufacturing. In such applications, collaborative robots (cobots) acting as helpers are a promising direction. We analyze the collaborative assembly scenario and conclude that visual scene understanding is a prerequisite for enabling autonomous decisions by cobots. We identify the open challenges in these visual recognition tasks and propose promising new ideas on how to overcome them.



https://doi.org/10.22032/dbt.51471

Simon, Rowena; Klemm, Matthias; Meller, Daniel; Hammer, Martin
Spectral calibration of fluorescence lifetime imaging ophthalmoscopy. - In: Acta ophthalmologica, ISSN 1755-3768, vol. 100 (2022), 2, pp. e612-e613

https://doi.org/10.1111/aos.14950

Gao, Hui; Kuang, Hongyu; Ma, Xiaoxing; Hu, Hao; Lü, Jian; Mäder, Patrick; Egyed, Alexander
Propagating frugal user feedback through closeness of code dependencies to improve IR-based traceability recovery. - In: Empirical software engineering, ISSN 1573-7616, vol. 27 (2022), 2, 41, 53 pages

Traceability recovery captures trace links among different software artifacts (e.g., requirements and code) when two artifacts cover the same part of the system's functionality. These trace links provide important support for developers in software maintenance and evolution tasks. Information Retrieval (IR) is now the mainstream technique for semi-automatic approaches that recover candidate trace links based on textual similarities among artifacts. The performance of IR-based traceability recovery is evaluated by the ranking of relevant traces in the generated lists of candidate links. Unfortunately, this performance is greatly hindered by the vocabulary mismatch problem between different software artifacts. To address this issue, a growing body of enhancement strategies based on user feedback has been proposed; these adjust the calculated IR values of candidate links after the user verifies a subset of these links. However, the improvement brought by this kind of strategy requires a large amount of user feedback, which can be infeasible in practice. In this paper, we propose to improve IR-based traceability recovery by propagating a small amount of user feedback through a closeness analysis of call and data dependencies in the code. Specifically, our approach first iteratively asks users to verify a small set of candidate links. The collected frugal feedback is then combined with the quantified functional similarity of each code dependency (called closeness) and the generated IR values to improve the ranking of unverified links. An empirical evaluation on nine real-world systems with three mainstream IR models shows that our approach outperforms five baseline approaches while using only a small amount of user feedback.
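The propagation idea can be sketched as follows. This is an editorial illustration with assumed IR values, assumed closeness weights, and an assumed linear update rule, not the paper's exact model: verified links keep their verdict, and each unverified link's IR value is shifted in proportion to the closeness between its code element and code elements with verified feedback.

# IR-computed similarity for candidate links (requirement, code element).
ir_values = {
    ("R1", "Order.java"): 0.62,
    ("R1", "Cart.java"): 0.35,
    ("R1", "Logger.java"): 0.40,
}

# Closeness of code dependencies: how strongly two code elements share
# functionality (derived from call and data dependencies in the paper).
closeness = {
    ("Order.java", "Cart.java"): 0.8,
    ("Order.java", "Logger.java"): 0.1,
}

def close(a, b):
    return closeness.get((a, b)) or closeness.get((b, a)) or 0.0

def propagate(ir_values, feedback, alpha=0.5):
    """Boost or penalize unverified links in proportion to the closeness
    between their code element and code elements with verified feedback."""
    adjusted = {}
    for (req, code), value in ir_values.items():
        if (req, code) in feedback:  # verified links keep their verdict
            adjusted[(req, code)] = 1.0 if feedback[(req, code)] else 0.0
            continue
        delta = sum((1.0 if ok else -1.0) * close(code, fb_code)
                    for (fb_req, fb_code), ok in feedback.items()
                    if fb_req == req)
        adjusted[(req, code)] = value + alpha * delta
    return adjusted

# The user confirms one link; the link to its close dependency rises in the
# ranking, while the link to the distant code element barely moves.
feedback = {("R1", "Order.java"): True}
for link, v in sorted(propagate(ir_values, feedback).items(), key=lambda kv: -kv[1]):
    print(link, round(v, 2))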



https://doi.org/10.1007/s10664-021-10091-5

Klee, Sascha; Link, Dietmar
Neuronal sources of visually evoked potentials using selective color opponent channel stimulation. - In: Acta ophthalmologica, ISSN 1755-3768, vol. 100 (2022), S267, 1 page

https://doi.org/10.1111/j.1755-3768.2022.064

Link, Dietmar; Krauss, Benedikt; Stodtmeister, Richard; Nagel, Edgar; Vilser, Walthard; Klee, Sascha
Determination of the tonographic effect in the human eye using a pneumatic pressure modulator. - In: Acta ophthalmologica, ISSN 1755-3768, vol. 100 (2022), S267, 1 page

https://doi.org/10.1111/j.1755-3768.2022.076

Schramm, Stefan; Dietzel, Alexander; Blum, Maren-Christina; Link, Dietmar; Klee, Sascha
Light-field fundus imaging under astigmatism - an eye model study. - In: Acta ophthalmologica, ISSN 1755-3768, vol. 100 (2022), S267, 1 page

https://doi.org/10.1111/j.1755-3768.2022.100

Dietzel, Alexander; Schramm, Stefan; Blum, Maren-Christina; Link, Dietmar; Klee, Sascha
Optic nerve head assessment in light-field fundus images - a case study. - In: Acta ophthalmologica, ISSN 1755-3768, vol. 100 (2022), S267, 1 page

https://doi.org/10.1111/j.1755-3768.2022.113

Blum, Maren-Christina; Klee, Sascha
Influence of pre-adaptation time to background illumination on the photopic negative response of the full-field electroretinogram. - In: Acta ophthalmologica, ISSN 1755-3768, vol. 100 (2022), S267, 1 page

https://doi.org/10.1111/j.1755-3768.2022.062

Fiedler, Patrique; Fonseca, Carlos; Supriyanto, Eko; Zanow, Frank; Haueisen, Jens
A high-density 256-channel cap for dry electroencephalography. - In: Human brain mapping, ISSN 1097-0193, vol. 43 (2022), 4, pp. 1295-1308

High-density electroencephalography (HD-EEG) is currently limited to laboratory environments, since state-of-the-art electrode caps require skilled staff and extensive preparation. We propose and evaluate a 256-channel cap with dry multipin electrodes for HD-EEG. We describe the design of the dry electrodes, made from polyurethane and coated with Ag/AgCl. In a study with 30 volunteers, we compare the novel dry HD-EEG cap to a conventional gel-based cap in terms of electrode-skin impedances, resting-state EEG, and visual evoked potentials (VEP). We perform wearing tests with eight electrodes mimicking cap applications on real human and artificial skin. Average impedances below 900 kΩ for 252 out of 256 dry electrodes enable recording with state-of-the-art EEG amplifiers. For the dry EEG cap, we obtained a channel reliability of 84% and a reduction of the preparation time of 69%. After exclusion of, on average, 16% (dry) and 3% (gel-based) bad channels, resting-state EEG, alpha activity, and pattern-reversal VEP can be recorded with less than 5% significant differences in all compared signal characteristic metrics. Volunteers reported a wearing comfort of 3.6 ± 1.5 and 4.0 ± 1.8 for the dry cap and 2.5 ± 1.0 and 3.0 ± 1.1 for the gel-based cap before and after the EEG recordings, respectively (scale 1-10). Wearing tests indicated that up to 3,200 applications are possible for the dry electrodes. The 256-channel dry-electrode HD-EEG cap overcomes the principal limitations of HD-EEG regarding preparation complexity and allows rapid application by persons without medical training, enabling new use cases for HD-EEG.
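As a hypothetical illustration of the impedance-based channel screening implied by the abstract (the synthetic impedance data and the 900 kΩ cutoff below are editorial assumptions, not the study's procedure), one might flag channels whose electrode-skin impedance exceeds a threshold and report the usable-channel ratio:

import numpy as np

rng = np.random.default_rng(1)
impedances_kohm = rng.lognormal(mean=5.0, sigma=1.2, size=256)  # synthetic cap

THRESHOLD_KOHM = 900  # amplifier-dependent; the paper reports <900 kOhm for 252/256
good = impedances_kohm < THRESHOLD_KOHM
print(f"usable channels: {good.sum()}/256 ({100 * good.mean():.1f}%)")
print(f"median impedance of usable channels: {np.median(impedances_kohm[good]):.0f} kOhm")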



https://doi.org/10.1002/hbm.25721