Last week Jörg and Thomas attended the JMP Workshop “Versuchsplanung und Datenvisualisierung für die Prozessentwicklung” in Cologne, where Louis Valente gave an interesting talk about Visual Data Exploration. Louis Valente is the Global Technical Team Enablement Manager at JMP/SAS.
The IEEE World Congress on Computational Intelligence (IEEE WCCI) is the largest technical event in the field of computational intelligence, and its 2014 edition will host three conferences: The 2014 International Joint Conference on Neural Networks (IJCNN 2014), the 2014 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2014), and the 2014 IEEE Congress on Evolutionary Computation (IEEE CEC 2014). IEEE WCCI 2014 will cross-fertilize among these three big areas and will provide a foremost forum for scientists, engineers, educators, and students from all over the world to discuss and present their research findings on computational intelligence. IEEE WCCI 2014 will be held in Beijing, the political, cultural, and educational center of China. The city’s history dates back three millennia. It is renowned for its palaces, temples, gardens, walls and gates, and its art treasures and universities have made it a center of culture and art in China.
- Transparency: Highly secure data centers (Google, Amazon, etc.) store and process data and (promise to) make the world more transparent
- Identity: “Big Data seeks to identify, but it also threatens identity.”
- Power: Big Data promises a more precise understanding of the world. But it creates winners (Big institutions?) and losers (individuals?).
The article Richards, Neil M. and King, Jonathan H., Three Paradoxes of Big Data (September 3, 2013). 66 Stanford Law Review Online 41 (2013) is available at SSRN: http://ssrn.com/abstract=2325537
Evolutionary computation methods such as evolutionary algorithms or ant colony optimization have been successfully applied to a wide range of problems. This includes classical combinatorial optimization problems and many hard real-world optimization problems. Real-world problems are often hard to optimize by traditional methods as they are non-linear, highly constrained, multi-objective, and come with a wide range of uncertainties. Many different evolutionary computation methods for dealing with complex problems have been proposed over the years.
In contrast to the successful applications to extremely difficult problems, the theoretical understanding of these algorithms lags far behind the practical success. This is unfortunate, since a rigorous understanding of the working principles of evolutionary methods can lead to a better understanding of the situations in which a given type of algorithm works, and can provide design guidelines for new powerful methods in practice.
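To make the kind of method discussed here concrete, the following is a minimal sketch of a (1+1) evolutionary algorithm on the classic OneMax toy problem (maximize the number of ones in a bit string). The function name, parameters, and problem choice are illustrative, not taken from any specific paper:

```python
import random

def one_plus_one_ea(fitness, n_bits=20, max_iters=1000, seed=42):
    """Minimal (1+1) EA: flip each bit independently with probability
    1/n and keep the offspring if it is at least as good as the parent."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n_bits)]
    best = fitness(parent)
    for _ in range(max_iters):
        # standard bitwise mutation with rate 1/n
        child = [b ^ (rng.random() < 1.0 / n_bits) for b in parent]
        f = fitness(child)
        if f >= best:  # accept ties to allow drift on plateaus
            parent, best = child, f
    return parent, best

# OneMax: the fitness of a bit string is simply the number of ones
solution, value = one_plus_one_ea(sum)
print(value)  # typically reaches the optimum of n_bits = 20
```

This simple algorithm is also one of the few for which rigorous runtime results exist, which is exactly the gap between theory and practice mentioned above.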
Slides from the GECCO’13 tutorial “How to create meaningful and generalizable results” are freely available on SPOTSeven’s publication page.
Computational Intelligence (CI) methods have gained importance in several real-world domains such as process optimization, system identification, data mining, or statistical quality control. However, tools that determine the applicability of CI methods in these application domains in an objective manner are lacking. Statistics provides methods for comparing algorithms on certain data sets. In the past, several test suites were presented and considered as state-of-the-art. However, these test suites have several drawbacks, namely:
- problem instances are somewhat artificial and have no direct link to real-world settings;
- since there is a fixed number of test instances, algorithms can be fitted or tuned to this specific and very limited set of test functions. As a consequence, studies (benchmarks) provide insight into how these algorithms perform on this specific set of test instances, but no insight is gained into how they perform in general;
- statistical tools for comparing several algorithms on different test problem instances are relatively complex and results are not easy to analyze.
In this tutorial, a methodology to overcome these difficulties is presented. It is based on standard ideas from statistics: analysis of variance and its extension to mixed models, see, e.g., [Chia10a]. This tutorial combines essential ideas from two approaches: problem generation and statistical analysis of computer experiments. This framework extends ideas presented in [Bart11g]. The generalization of results from multi-objective optimization problems will also be addressed.
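The combination of problem generation and statistical analysis can be sketched with a toy example: instead of a fixed test suite, problem instances are drawn from a generator, two algorithms are run on each instance, and the per-instance performance differences are analyzed. The instance generator, the two search heuristics, and all parameter values below are hypothetical stand-ins, and the paired t statistic is a drastic simplification of the mixed-model analysis discussed in the tutorial:

```python
import random
import statistics

def generate_instance(rng):
    """Hypothetical instance generator: a random quadratic f(x) = (x - c)^2."""
    c = rng.uniform(-5, 5)
    return lambda x: (x - c) ** 2

def random_search(f, rng, budget=50):
    """Baseline: best of `budget` uniform random samples."""
    return min(f(rng.uniform(-10, 10)) for _ in range(budget))

def local_search(f, rng, budget=50, step=0.5):
    """Simple hill climber with Gaussian steps."""
    x = rng.uniform(-10, 10)
    best = f(x)
    for _ in range(budget - 1):
        y = x + rng.gauss(0, step)
        if f(y) < best:
            x, best = y, f(y)
    return best

rng = random.Random(1)
diffs = []
for _ in range(100):  # draw instances from the generator, not a fixed suite
    f = generate_instance(rng)
    diffs.append(random_search(f, rng) - local_search(f, rng))

# paired analysis: instances act as blocks, as in the mixed-model view
mean = statistics.mean(diffs)
se = statistics.stdev(diffs) / len(diffs) ** 0.5
print(round(mean / se, 2))  # t statistic for the algorithm effect
```

Because new instances can be generated at will, conclusions refer to the instance distribution rather than to a handful of hand-picked test functions, which addresses the overfitting drawback listed above.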
The elective course (Wahlpflichtfach, WPF) 45 “Data Mining” covers, among others, the following topics:
- How do search engines put the results of a query into a sensible order?
- How does the PageRank algorithm work?
- How can groups of individuals that belong together be uncovered in social networks or from connection data?
- How are profiles of internet users created in order to predict their behavior or to place advertising in a targeted way?
- How do web crawlers work?
- How can text be automatically extracted from existing internet sources with wrappers and then classified?
- How is opinion mining done on the internet, i.e., determining whether a given text is positive or negative about a product?
- What is web usage mining?
- How can web server log files be analyzed and searched for patterns?
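One of the topics above, the PageRank algorithm, can be sketched in a few lines. The following is a minimal power-iteration implementation on a toy link graph; the graph and parameter values are illustrative only:

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank on an adjacency dict {node: [outlinks]}."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        # every node receives the "teleport" share (1 - damping) / n
        new = {u: (1 - damping) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if out:
                # a node passes its damped rank equally to its outlinks
                share = damping * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:
                # dangling node: distribute its rank uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

# toy web graph: page "a" is linked by both other pages and ranks highest
graph = {"a": ["b"], "b": ["a"], "c": ["a"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # -> a
```

The ranks form a probability distribution (they sum to 1), which is the “random surfer” interpretation usually taught alongside the algorithm.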
The elective is offered jointly with professors Dr. Faeskorn-Woyke and Dr. Leopold. The preliminary meeting for the Data Mining elective takes place on October 1, 2013 at 13:00 in room 2.108. Further information can be found on my teaching web page MathAndMore.
The new issue of the newsletter of the International Society on Multiple Criteria Decision Making has been published. Please visit http://mcdmsociety.org/MCDMNews/.