Publication list from the university bibliography (Hochschulbibliographie)

Please note that the Hochschulbibliographie reflects the data as of 31 July 2024.
All newer entries can be found in the university bibliography of the Technische Universität Ilmenau (TUUniBib).

Number of hits: 62
Generated: Thu, 26 Sep 2024 06:42:02 +0200 in 0.1119 sec


Dietzfelbinger, Martin; Peilke, Hendrik; Rink, Michael
A more reliable greedy heuristic for maximum matchings in sparse random graphs. - In: Experimental Algorithms, (2012), pp. 148-159

http://dx.doi.org/10.1007/978-3-642-30850-5_14

Dietzfelbinger, Martin; Mitzenmacher, Michael; Rink, Michael
Cuckoo hashing with pages. - In: Algorithms - ESA 2011, (2011), pp. 615-627

http://dx.doi.org/10.1007/978-3-642-23719-5

Dietzfelbinger, Martin
Fingerprinting. - In: Algorithms unplugged, (2011), pp. 181-193

Dietzfelbinger, Martin; Rowe, Jonathan E.; Wegener, Ingo; Wölfel, Philipp
Precision, local search and unimodal functions. - In: Algorithmica, ISSN 1432-0541, vol. 59 (2011), no. 3, pp. 301-322

We investigate the effects of precision on the efficiency of various local search algorithms on 1-D unimodal functions. We present a (1+1)-EA with adaptive step size which finds the optimum in O(log n) steps, where n is the number of points used. We then consider binary (base-2) and reflected Gray code representations with single bit mutations. The standard binary method does not guarantee locating the optimum, whereas using the reflected Gray code does so in O((log n)^2) steps. A (1+1)-EA with a fixed mutation probability distribution is then presented which also runs in O((log n)^2). Moreover, a recent result shows that this is optimal (up to some constant scaling factor), in that there exist unimodal functions for which a lower bound of Ω((log n)^2) holds regardless of the choice of mutation distribution. For continuous multimodal functions, the algorithm also locates the global optimum in O((log n)^2). Finally, we show that it is not possible for a black box algorithm to efficiently optimise unimodal functions for two or more dimensions (in terms of the precision used).
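
As an illustration of the adaptive step-size idea from this abstract, the following sketch (hypothetical names and parameters, not the paper's exact algorithm) doubles the step after an improving move and halves it after a failure, which is how roughly O(log n) improving steps can suffice on a unimodal function over n points:

import random

# Hypothetical adaptive-step-size (1+1) local search on {0, ..., n-1}.
# Sketch only: assumes f is unimodal, i.e. has a single local maximum.
def adaptive_local_search(f, n, iters=10000):
    x = random.randrange(n)                  # current point
    fx = f(x)
    step = 1                                 # adaptive step size
    for _ in range(iters):
        y = min(n - 1, max(0, x + random.choice((-1, 1)) * step))
        fy = f(y)
        if fy > fx:
            x, fx = y, fy
            step = min(step * 2, n)          # success: double the step
        else:
            step = max(step // 2, 1)         # failure: halve the step
    return x

# Example: on n = 1024 points, the unimodal f(v) = -(v - 700)**2 peaks at
# v = 700, and adaptive_local_search(lambda v: -(v - 700) ** 2, 1024)
# returns 700 with high probability.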



http://dx.doi.org/10.1007/s00453-009-9352-x

Vöcking, Berthold; Alt, Helmut; Dietzfelbinger, Martin; Reischuk, Rüdiger; Scheideler, Christian; Vollmer, Heribert; Wagner, Dorothea
Algorithms unplugged. - Berlin, Heidelberg : Springer-Verlag, 2011. - Online resource (X, 406 pp.). - ISBN 978-3-642-15328-0
Description based upon print version of record

Algorithms specify the way computers process information and how they execute tasks. Many recent technological innovations and achievements rely on algorithmic ideas - they facilitate new applications in science, medicine, production, logistics, traffic, communication and entertainment. Efficient algorithms not only enable your personal computer to execute the newest generation of games with features unimaginable only a few years ago, they are also key to several recent scientific breakthroughs - for example, the sequencing of the human genome would not have been possible without the invention of new algorithmic ideas that speed up computations by several orders of magnitude. The greatest improvements in the area of algorithms rely on beautiful ideas for tackling computational tasks more efficiently. The problems solved are not restricted to arithmetic tasks in a narrow sense but often relate to exciting questions of nonmathematical flavor, such as: How can I find the exit out of a maze? How can I partition a treasure map so that the treasure can only be found if all parts of the map are recombined? How should I plan my trip to minimize cost? Solving these challenging problems requires logical reasoning, geometric and combinatorial imagination, and, last but not least, creativity - the skills needed for the design and analysis of algorithms. In this book we present some of the most beautiful algorithmic ideas in 41 articles written in colloquial, nontechnical language. Most of the articles arose out of an initiative among German-language universities to communicate the fascination of algorithms and computer science to high-school students. The book can be understood without any prior knowledge of algorithms and computing, and it will be an enlightening and fun read for students and interested adults.



http://dx.doi.org/10.1007/978-3-642-15328-0

Dietzfelbinger, Martin
Ingo Wegener: seine Bücher [Ingo Wegener: his books]. - In: Informatik-Spektrum, ISSN 1432-122X, vol. 33 (2010), no. 4, p. 485

http://dx.doi.org/10.1007/s00287-010-0463-1

Dietzfelbinger, Martin; Goerdt, Andreas; Mitzenmacher, Michael; Montanari, Andrea; Pagh, Rasmus; Rink, Michael
Tight thresholds for cuckoo hashing via XORSAT. - In: Automata, languages and programming, (2010), pp. 213-225

http://dx.doi.org/10.1007/978-3-642-14165-2_19

Dietzfelbinger, Martin; Rowe, Jonathan E.; Wegener, Ingo; Wölfel, Philipp
Tight bounds for blind search on the integers and the reals. - In: Combinatorics, probability & computing, ISSN 1469-2163, vol. 19 (2010), no. 5/6, pp. 711-728

https://doi.org/10.1017/S0963548309990599

Aumüller, Martin; Dietzfelbinger, Martin; Rink, Michael
Experimental variations of a theoretically good retrieval data structure. - In: Algorithms - ESA 2009, (2009), pp. 742-751

A retrieval data structure implements a mapping from a set S of n keys to the range R = {0,1}^r, e.g. given by a list of key-value pairs (x,v) ∈ S×R, but an element outside S may be mapped to any value. Asymptotically, minimal perfect hashing allows one to build such a data structure that needs n·log_2(e) + nr + o(n) bits of memory and has constant evaluation time. Recently, data structures based on other approaches have been proposed that have linear construction time, constant evaluation time and space consumption O(nr) bits or even (1+ε)nr bits for arbitrary ε > 0. This paper explores the practicability of one such theoretically very good proposal, bridging a gap between theory and real data structures.
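
To make the definition concrete, here is a minimal sketch of one standard XOR-based construction of a retrieval data structure (the general idea only, not necessarily the variant benchmarked in the paper; all names are illustrative). Each key XORs three r-bit table cells, the table has about 1.23n cells, and construction proceeds by "peeling" keys that are the sole occupant of some cell:

# Illustrative XOR-based retrieval sketch. Python's hash() is randomized
# per process but stable within one run, which suffices here.
def build_retrieval(pairs, c=1.23, seeds=(11, 22, 33)):
    n = len(pairs)
    block = int(c * n) // 3 + 1           # three disjoint blocks of cells
    m = 3 * block

    def cells(key):                       # three distinct cell positions
        return [hash((s, key)) % block + j * block
                for j, s in enumerate(seeds)]

    touching = [set() for _ in range(m)]  # which keys touch each cell
    for i, (key, _) in enumerate(pairs):
        for p in cells(key):
            touching[p].add(i)

    # Peel: repeatedly remove a key that is the sole occupant of a cell.
    order, stack = [], [p for p in range(m) if len(touching[p]) == 1]
    while stack:
        p = stack.pop()
        if len(touching[p]) != 1:
            continue
        (i,) = touching[p]
        order.append((i, p))
        for q in cells(pairs[i][0]):
            touching[q].discard(i)
            if len(touching[q]) == 1:
                stack.append(q)
    if len(order) != n:
        raise RuntimeError("peeling failed; retry with other seeds")

    # Assign cells in reverse peeling order so each XOR equation holds.
    table = [0] * m
    for i, p in reversed(order):
        key, value = pairs[i]
        table[p] = value
        for q in cells(key):
            if q != p:
                table[p] ^= table[q]
    return table, cells

def retrieve(table, cells, key):
    v = 0
    for p in cells(key):
        v ^= table[p]
    return v

Querying a key outside the stored set XORs the same three cells and returns an arbitrary value, exactly as the abstract allows; this is what lets the structure get away with roughly 1.23·nr bits instead of storing the keys.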



http://dx.doi.org/10.1007/978-3-642-04128-0_66

Belazzougui, Djamal; Botelho, Fabiano C.; Dietzfelbinger, Martin
Hash, displace, and compress. - In: Algorithms - ESA 2009, (2009), pp. 682-693

A hash function h, i.e., a function from the set U of all keys to the range [m] = {0,...,m-1}, is called a perfect hash function (PHF) for a subset S ⊆ U of size n ≤ m if h is 1-1 on S. The important performance parameters of a PHF are representation size, evaluation time and construction time. In this paper, we present an algorithm that permits obtaining PHFs with expected representation size very close to optimal while retaining O(n) expected construction time and O(1) evaluation time in the worst case. For example, in the case m = 1.23n we obtain a PHF that uses space 1.4 bits per key, and for m = 1.01n we obtain space 1.98 bits per key, which was not achievable with previously known methods. Our algorithm is inspired by several known algorithms; the main new feature is that we combine a modification of Pagh's hash-and-displace approach with data compression on a sequence of hash function indices. Our algorithm can also be used for k-perfect hashing, where at most k keys may be mapped to the same value.
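
A minimal sketch of the hash-and-displace part of this scheme (illustrative only; the paper's algorithm additionally compresses the sequence of stored indices, which this sketch keeps as a plain list, and all names and parameters here are assumptions):

# Hash, displace (and, in the paper, compress): keys are grouped into
# buckets; buckets are placed largest first, each by searching for the
# smallest displacement index d whose hash sends all of the bucket's
# keys to still-free slots of [m].
def build_phf(keys, m=None, bucket_load=5):
    n = len(keys)
    m = m or int(1.23 * n) + 1            # range size, e.g. m = 1.23n
    b = n // bucket_load + 1              # number of buckets

    def bucket_of(key):
        return hash(("bucket", key)) % b

    def slot(key, d):
        return hash((d, key)) % m

    buckets = [[] for _ in range(b)]
    for key in keys:
        buckets[bucket_of(key)].append(key)

    free = [True] * m
    displacement = [0] * b
    # Hardest (largest) buckets first, as in hash-and-displace schemes.
    for bi in sorted(range(b), key=lambda i: -len(buckets[i])):
        d = 0
        while True:
            slots = [slot(key, d) for key in buckets[bi]]
            if len(set(slots)) == len(slots) and all(free[s] for s in slots):
                break
            d += 1
            if d > 1_000_000:
                raise RuntimeError("no displacement found; enlarge m")
        for s in slots:
            free[s] = False
        displacement[bi] = d
    return displacement, bucket_of, slot

def phf(displacement, bucket_of, slot, key):
    # Perfect on the construction keys: no two of them share a slot.
    return slot(key, displacement[bucket_of(key)])

With m close to n and small buckets, the displacement values stay small on average; that skew is what the data-compression step of the paper exploits to push the stored indices toward the optimal space bound.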



http://dx.doi.org/10.1007/978-3-642-04128-0_61