Some past & current clients
NDC Garbe, Munich
Velo 3D, Fremont, CA
Zaha Hadid Architects, London
Karamba3D, Vienna
Robert McNeel & Associates, Seattle, WA
Company | Location | Job title | Format | Years
Various clients | Munich | Consultant | Freelance | ongoing
Karamba3d.com | Vienna | Mathematical Consultant and Programmer | Freelance | 2020
Zaha Hadid Architects | London | Senior Researcher | | 2016 – 2020
Ludwig Maximilian University | Munich | Postdoctoral Researcher | | 2013 – 2016
TNG Technology Consulting GmbH | Munich | Statistical Consultant | Freelance | 2014
Georg August University | Goettingen | Postdoctoral Researcher | | 2009 – 2012
Georg August University, Institute for Numerical and Applied Mathematics | Goettingen | Visiting Professor | | 2008
Georg August University | Goettingen | Postdoctoral Researcher | | 2007 – 2008
Aix-Marseille University | Marseille | Doctoral Researcher at Graduate School | | 2003 – 2007
A complete overview of the finite element modeling steps for the special case of the stationary heat equation in engineering.
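As a toy illustration of those modeling steps (mesh, element stiffness, assembly, boundary conditions, solve), here is a minimal Python sketch for the one-dimensional stationary heat equation -u'' = f with linear elements; the mesh size, the source term f = 1, and all names are hypothetical choices of mine, not taken from the paper.

    import numpy as np

    # 1D stationary heat equation -u'' = f on [0, 1], u(0) = u(1) = 0,
    # discretized with piecewise-linear finite elements on a uniform mesh.
    n_el = 50                       # number of elements
    h = 1.0 / n_el                  # element size
    nodes = np.linspace(0.0, 1.0, n_el + 1)

    # Assemble the global stiffness matrix from the 2x2 element matrix.
    K = np.zeros((n_el + 1, n_el + 1))
    k_el = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in range(n_el):
        K[e:e + 2, e:e + 2] += k_el

    # Load vector for the constant source f(x) = 1: each element spreads
    # its total load h evenly over its two nodes.
    f = np.zeros(n_el + 1)
    for e in range(n_el):
        f[e:e + 2] += h / 2.0

    # Impose the homogeneous Dirichlet conditions by reducing the system.
    u = np.zeros(n_el + 1)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], f[1:-1])

    # The exact solution of -u'' = 1 here is x(1 - x)/2; the nodal error is tiny.
    print(np.max(np.abs(u - nodes * (1.0 - nodes) / 2.0)))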
We show that the error rate of a supervised learning algorithm can be estimated without bias, provided the test set is at least twice as large as the learning set.
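A toy Monte Carlo check of that setup might look as follows; the data-generating process, the nearest-mean classifier, and the sample sizes are hypothetical choices of mine, not the paper's.

    import numpy as np

    rng = np.random.default_rng(0)
    n_learn, n_test, reps = 50, 100, 2000   # test set twice the learning set

    def draw(n):
        """Two Gaussian classes in 1D, equal priors, means -1 and +1."""
        y = rng.integers(0, 2, n)
        x = rng.normal(2.0 * y - 1.0, 1.0)
        return x, y

    errors = []
    for _ in range(reps):
        x, y = draw(n_learn)
        # 'Learning': estimate one mean per class (a nearest-mean classifier).
        m0, m1 = x[y == 0].mean(), x[y == 1].mean()
        xt, yt = draw(n_test)
        pred = (np.abs(xt - m1) < np.abs(xt - m0)).astype(int)
        errors.append(np.mean(pred != yt))

    # Averaged over repetitions, this estimates the expected error rate.
    print(np.mean(errors))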
We show a way to make the results of pedestrian simulation available to the designer. It relies on a spatial regression of the simulated density on distance fields and other architectural features.
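Schematically, such a spatial regression could look like the sketch below; the distance fields and densities are synthetic placeholders, not the simulation outputs used in the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-ins: per grid cell, the distance to the nearest exit and
    # to the nearest wall, plus a 'simulated' pedestrian density.
    n_cells = 500
    dist_exit = rng.uniform(0.0, 30.0, n_cells)
    dist_wall = rng.uniform(0.0, 10.0, n_cells)
    density = 2.0 - 0.05 * dist_exit + 0.1 * dist_wall + rng.normal(0.0, 0.2, n_cells)

    # Spatial regression of density on the distance fields (ordinary least squares).
    X = np.column_stack([np.ones(n_cells), dist_exit, dist_wall])
    coef, *_ = np.linalg.lstsq(X, density, rcond=None)
    print(coef)   # intercept plus one weight per architectural feature

The fitted coefficients can then be applied to the distance fields of a new design, giving the designer a density prediction without rerunning the full simulation.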
My first architectural paper. We described the outputs of finite element software in terms of geometric features via a linear regression. This was fun and interesting, but the most important part was discovering that the z component of the surface normal was an important explanatory variable in that regression. We arrived at this by investigating the Airy approach to structural mechanics more closely.
Here, we looked at the problem of integrating different groups of covariables, the prototypical situation being when mRNA and miRNA data are to be combined. We pursued a fairly immediate line of thought: penalized regression generalises naturally to multiple data sources by applying a different lasso penalty to each modality. The details, however, are quite nitty-gritty.
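One standard way to realise modality-specific penalties is to rescale each column by its penalty weight and run an ordinary lasso: penalising coefficient j with weight w_j is equivalent to fitting on the scaled feature X_j / w_j and dividing the resulting coefficient by w_j afterwards. A minimal sketch with synthetic data, scikit-learn's Lasso, and hypothetical weights (twice the penalty on the miRNA block):

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)
    n, p_mrna, p_mirna = 100, 30, 10
    X = rng.normal(size=(n, p_mrna + p_mirna))
    y = X[:, 0] - 2.0 * X[:, p_mrna] + rng.normal(size=n)

    # One penalty weight per column; the miRNA block is penalised twice as hard.
    w = np.r_[np.ones(p_mrna), 2.0 * np.ones(p_mirna)]

    # Weighted lasso via rescaling: fit an ordinary lasso on X / w, then map
    # the coefficients back with b_j = c_j / w_j.
    model = Lasso(alpha=0.1).fit(X / w, y)
    beta = model.coef_ / w
    print(beta[:2], beta[p_mrna:p_mrna + 2])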
Yet another topic: theoretical statistics. This is my second-favorite paper listed here. We identified and computed certain covariances between evaluations of error estimators, and we used U-statistics to show how these covariances can be estimated.
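As general background rather than the paper's specific construction: a U-statistic with a symmetric kernel h of order m averages h over all size-m subsets of the sample,

    U_n = \binom{n}{m}^{-1} \sum_{1 \le i_1 < \cdots < i_m \le n} h(X_{i_1}, \ldots, X_{i_m}),

and is an unbiased estimator of E h(X_1, ..., X_m); covariances between error estimators are expectations of exactly this form.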
Back to pure mathematics. In spirit, this paper is not too far from my PhD thesis. I think it is the paper I am most proud of.
Yet another topic: a general overview of data combination strategies. In retrospect, though, there are better strategies out there than the ones we looked at.
Once again, a completely different topic: biology, in particular developmental biology. The primitive streak is the first morphogenetic feature in the nascent mammalian embryo. We applied circular statistics to show that cell divisions prefer certain directions.
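For illustration, a textbook tool from circular statistics for detecting a preferred direction is the Rayleigh test; below is a minimal numpy sketch with synthetic division angles, using the large-sample approximation p ≈ exp(-n R̄²) to the p-value.

    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic division angles (radians), mildly concentrated around 0.
    angles = rng.vonmises(mu=0.0, kappa=1.0, size=200)

    # Mean resultant length: 1 means perfectly aligned, 0 means no preference.
    n = angles.size
    R_bar = np.abs(np.mean(np.exp(1j * angles)))

    # Rayleigh test of uniformity; large-sample approximation to the p-value.
    z = n * R_bar**2
    p = np.exp(-z)
    print(R_bar, p)   # a small p suggests a preferred division direction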
My first non-mathematical paper: some work on biological databases, of a rather algebraic, non-quantitative flavor. Biological databases are nothing I ever worked on before or since.
The title says it all: we compute the group homology of a certain class of groups acting on hyperbolic three-space. It was (mostly) fun. I learned a lot about homological algebra and about fundamental sets of group actions.
Thesis
PhD thesis: M. Fuchs, "Cohomologie cyclique des produits croisés associés aux groupes de Lie" (Cyclic cohomology of crossed products associated with Lie groups). Written at the Institut de Mathématiques de Luminy under the direction of Michael Puschnigg at the Université de la Méditerranée Aix-Marseille.
https://arxiv.org/abs/math/0612023
I give a completely new proof of the fact that the group ring of a torsion-free, discrete, cocompact subgroup of SL(n, C) satisfies the Kadison-Kaplansky conjecture: it is free of non-trivial idempotents. Unfortunately, I never returned to that topic later.
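For reference, and in my notation rather than the thesis': for \Gamma a torsion-free, discrete, cocompact subgroup of SL(n, \mathbb{C}), the statement proved is

    e \in \mathbb{C}\Gamma, \quad e^2 = e \quad \Longrightarrow \quad e \in \{0, 1\}.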
Technical Reports

In a simulation study on supervised learning, the following process is carried out repeatedly, with independent repetitions. One draws a learning sample from a distribution; the size of the learning sample, i.e., the number of observations it comprises, is kept the same throughout. Each observation consists of predictors (or covariables) and an outcome, i.e., the value of a response variable, for instance a binary prediction target. One fits a predictive model to the learning sample and then evaluates its performance on a test set. Averaging across all independent iterations is guaranteed to converge towards the true expected value of the error estimator - the true error. It might seem a little unintuitive that this holds regardless of the size of the test set. The reason is that the mentioned expected value is unaffected by the test-set size. Therefore, it is just a matter of computational convenience to choose that size carefully. The goal of this paper is to make that choice in an educated way, as a function of the execution speed of the computation.
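In symbols (my notation, not the report's): with B independent repetitions and \hat{e}_b the test-set error estimate from repetition b, the Monte Carlo average converges,

    \frac{1}{B} \sum_{b=1}^{B} \hat{e}_b \;\xrightarrow{B \to \infty}\; \mathbb{E}[\hat{e}_1],

and the right-hand side does not depend on the test-set size m; m only affects the variance of each \hat{e}_b, which is what creates the computational trade-off the report addresses.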
We show a number of properties of the error estimator of a machine learning algorithm on a real data sample of fixed size (typically denoted by n in statistics). Among other statements, we show that the studentized error estimator is asymptotically normally distributed, which is a new contribution.
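Schematically, and again in my notation rather than the paper's: with \hat{e}_n the error estimator and \widehat{\sigma}_n a consistent estimate of its standard deviation, the asymptotic-normality statement reads

    \frac{\hat{e}_n - \mathbb{E}[\hat{e}_n]}{\widehat{\sigma}_n} \;\xrightarrow{d}\; \mathcal{N}(0, 1) \quad (n \to \infty).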