Past talks in the Joint Analysis Seminar
No events have taken place in the past six months.
Past talks in the Oberseminar
Sakirudeen Abdulsalaam (RWTH Aachen University):
Phase recovery from masked measurements
It is commonly known that Fourier phase is often more crucial than Fourier magnitude in reconstructing a signal from its Fourier transform. However, in many real-life measurement systems, only the squared magnitude of the Fourier transform of the underlying signal is available, because the phase is lost or expensive or impractical to measure. The problem of reconstructing a signal from its Fourier magnitude alone is known as the phase retrieval problem, and it is generally very difficult. In this talk, we will consider recent convex optimization techniques for phase recovery from Fourier measurements with random masks.
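The masked measurement model can be made concrete with a small numerical sketch. The snippet below implements a PhaseLift-style semidefinite relaxation, presumably in the spirit of the techniques discussed in the talk; the mask distribution, problem sizes, and the use of cvxpy are illustrative assumptions, not taken from the abstract.

```python
# Illustrative PhaseLift-style sketch (assumptions, not the talk's exact method):
# recover x from squared magnitudes |F D_k x|^2 with random modulation masks D_k.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, K = 8, 4                                     # signal length, number of masks
x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

F = np.fft.fft(np.eye(n))                       # DFT matrix
masks = rng.choice(np.array([1, -1, 1j, -1j]), size=(K, n))
A = np.vstack([F @ np.diag(d) for d in masks])  # stacked masked Fourier maps
b = np.abs(A @ x0) ** 2                         # phaseless measurements

# Lift: |a_i^T x|^2 = a_i^T (x x^*) conj(a_i); relax rank one to PSD + min trace.
X = cp.Variable((n, n), hermitian=True)
constraints = [X >> 0]
constraints += [cp.real(A[i] @ X @ A[i].conj()) == b[i] for i in range(len(b))]
cp.Problem(cp.Minimize(cp.real(cp.trace(X))), constraints).solve()

w, V = np.linalg.eigh(X.value)                  # extract the leading rank-one factor
x_hat = np.sqrt(max(w[-1], 0)) * V[:, -1]
x_hat *= np.vdot(x_hat, x0) / abs(np.vdot(x_hat, x0))   # fix the global phase
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```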
Emmanuel Abbe (EPFL):
Leap complexity and generalization on the unseen
Boolean functions with sparse Fourier transform on uniform inputs are known to be efficiently learnable in the PAC model. Here we consider the more specific model of learning with SGD on 'regular' neural networks. We claim that the sample complexity of learning sparse target functions in this model is controlled by a new "leap" complexity measure, which measures how "hierarchical" target functions are in the orthonormal L2 basis (the Fourier transform for Boolean functions). For depth 2, we prove such a claim: we show that a time complexity of d^Leap is sufficient for a (layerwise projected) SGD and necessary for noisy GD. In particular, it is shown that SGD learns such functions with a saddle-to-saddle dynamic by climbing the degree of hierarchical features. We then discuss consequences for out-of-distribution generalization and how this leads to a new 'degree curriculum' learning algorithm. Joint work with E. Boix (MIT), T. Misiakiewicz (Stanford), S. Bengio (Apple), and A. Lotfi (EPFL).
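As a loose illustration (my notation, not the speakers'): if the leap of a sparse target is taken to be the largest number of new coordinates any monomial introduces, minimized over orderings of the monomials, it can be brute-forced on toy examples.

```python
# Brute-force leap of a sparse Boolean target given its monomial supports.
# Assumes the ordering-based reading of the definition: leap = min over
# orderings of the max number of NEW coordinates each monomial introduces.
from itertools import permutations

def leap(supports):
    best = float("inf")
    for order in permutations(supports):
        seen, worst = set(), 0
        for S in order:
            worst = max(worst, len(set(S) - seen))
            seen |= set(S)
        best = min(best, worst)
    return best

# "Staircase" target x1 + x1*x2 + x1*x2*x3: each monomial adds one coordinate.
print(leap([(1,), (1, 2), (1, 2, 3)]))   # -> 1
# Isolated monomial x1*x2*x3: three coordinates must be found at once.
print(leap([(1, 2, 3)]))                 # -> 3
```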
Simon S. Du (University of Washington):
Passive and Active Multi-Task Representation Learning
Representation learning has been widely used in many applications. In this talk, I will present our work, which uncovers when and why representation learning provably improves sample efficiency from a statistical learning point of view. Furthermore, I will talk about how to actively select the most relevant task to boost performance.
Arinze Folarin (RWTH Aachen University):
Recovery of low-rank tensors via tractable algorithms
Investigating the feasibility of tensor recovery via tractable algorithms is a current and critical area of study, with widespread applications in fields like image processing, machine learning, and scientific simulations. This work builds upon previous achievements in compressed sensing and low-rank matrix recovery. Despite some sub-optimal or near-optimal results achieved in tensor recovery, the aim of this research is to determine the optimal number of measurements required for low-rank tensor recovery using tractable algorithms such as iterative hard thresholding and Riemannian gradient iteration.
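For concreteness, here is a minimal numpy sketch of tensor iterative hard thresholding under a Tucker-rank model with Gaussian measurements; the rank model, the truncated-HOSVD thresholding step, and all problem sizes are illustrative assumptions.

```python
# Sketch of tensor IHT (assumed Tucker-rank model, Gaussian measurements).
# The thresholding step uses a truncated HOSVD, a quasi-optimal projection.
import numpy as np

rng = np.random.default_rng(1)
dims, ranks = (6, 6, 6), (2, 2, 2)

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):  # multiply tensor T by matrix M along one mode
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd_project(T, ranks):
    Us = [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]
          for k, r in enumerate(ranks)]
    out = T
    for k, U in enumerate(Us):
        out = mode_mult(out, U.T, k)        # compress to the core
    for k, U in enumerate(Us):
        out = mode_mult(out, U, k)          # expand back
    return out

# Random ground-truth tensor of Tucker rank (2, 2, 2).
X = rng.standard_normal(ranks)
for k, d in enumerate(dims):
    X = mode_mult(X, rng.standard_normal((d, ranks[k])), k)

m = 180                                     # well above the low-rank dof count
A = rng.standard_normal((m, X.size)) / np.sqrt(m)
y = A @ X.ravel()

Z = np.zeros_like(X)
for _ in range(500):
    grad = (A.T @ (y - A @ Z.ravel())).reshape(dims)
    Z = hosvd_project(Z + grad, ranks)      # step size 1 assumes near-isometry
print("relative error:", np.linalg.norm(Z - X) / np.linalg.norm(X))
```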
Robert Kunsch (RWTH Aachen University):
Randomized approximation of vectors - lower bounds and adaption
We consider linear problems within the framework of information-based complexity. It is well known that a linear problem can be solved by deterministic algorithms at arbitrary precision if and only if the solution operator is compact. Whether or not the corresponding statement holds for randomized algorithms, however, is still unknown. We approach this problem by studying, in particular, the approximation of finite-dimensional vectors. Surprisingly, adaption does make a difference in the randomized setting.
Frederik Hoppe (RWTH Aachen University):
Uncertainty quantification for sparse Fourier recovery
One of the most prominent methods for uncertainty quantification in high-dimensional statistics is the desparsified LASSO, which relies on unconstrained l_1-minimization. The majority of initial works focused on real (sub-)Gaussian designs. However, in many applications, such as magnetic resonance imaging (MRI), the measurement process possesses a certain structure due to the nature of the problem; the measurement operator in MRI can be described by a subsampled Fourier matrix. We extend the uncertainty quantification process using the desparsified LASSO to design matrices originating from a bounded orthonormal system, which naturally generalizes the subsampled Fourier case and also allows for the treatment of the case where the sparsity basis is not the standard basis. In particular, we construct confidence intervals for every pixel of an MR image that is sparse in the standard basis, provided the number of measurements satisfies n > max(s log^2(s) log(p), s log^2(p)), or that is sparse with respect to the Haar wavelet basis, provided a slightly larger number of measurements. (Joint work with Felix Krahmer, Claudio Mayrink Verdun, Marion I. Menzel, and Holger Rauhut.)
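A rough numerical sketch of the debiasing step for a subsampled Fourier design (the standard-basis case) is given below; the ISTA solver, the regularization level, and the complex-Gaussian confidence radius are illustrative choices, not the paper's exact construction.

```python
# Sketch: desparsified LASSO for a subsampled-Fourier design (illustrative).
# Columns of A are normalized so that A^* A ~ Id in expectation.
import numpy as np

rng = np.random.default_rng(2)
p, n, s, sigma = 256, 128, 8, 0.05

rows = rng.choice(p, size=n, replace=False)
A = np.fft.fft(np.eye(p))[rows] / np.sqrt(n)        # unit-norm columns
x = np.zeros(p)
x[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
noise = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
y = A @ x + noise

# LASSO via ISTA; soft thresholding acts on the modulus of complex entries.
lam, t = sigma * np.sqrt(2 * np.log(p)), n / p      # step t = 1 / ||A||^2
xk = np.zeros(p, dtype=complex)
for _ in range(1000):
    z = xk + t * (A.conj().T @ (y - A @ xk))
    xk = np.maximum(np.abs(z) - t * lam, 0) * np.exp(1j * np.angle(z))

# Desparsified estimate: for A^* A ~ Id the correction matrix is simply Id.
xu = xk + A.conj().T @ (y - A @ xk)

# 95% circular confidence radius for complex Gaussian noise, remainder ignored:
# P(|CN(0, sigma^2)| > r) = exp(-r^2 / sigma^2) = 0.05  =>  r = sigma*sqrt(ln 20)
r = sigma * np.sqrt(np.log(20))
print(f"empirical coverage: {np.mean(np.abs(xu - x) <= r):.2%}")
```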
Wiebke Bartolomaeus (RWTH Aachen University):
Implicit bias for reconstructing sparse complex signals
In deep learning one is often in a (highly) overparametrized setting, meaning that there are far more learnable parameters than available training data. Nevertheless, experiments show that the generalization error after training with (stochastic) gradient descent is still small, while one would expect overfitting, i.e., small training error and relatively large test error. So there is an implicit bias towards learning networks that generalize well, in settings where infinitely many networks can achieve zero training loss. We study this phenomenon by recovering sparse complex signals from linear (complex) measurements via an overparametrization of the signal. We work in Cartesian coordinates as well as polar coordinates and study the resulting gradient flows. For this, we introduce the notion of Wirtinger derivatives. We accompany this with some numerical findings.
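A toy real-valued analogue of this kind of experiment (the complex case would replace the gradient by Wirtinger derivatives): overparametrize the signal as x = u*u - v*v and run plain gradient descent from a small initialization, which empirically biases the dynamics towards sparse solutions. All parameter choices below are illustrative.

```python
# Toy implicit-bias experiment: overparametrized sparse recovery with plain GD.
# Parametrization x = u*u - v*v; a tiny initialization drives the sparse bias.
import numpy as np

rng = np.random.default_rng(3)
n, m, s = 100, 40, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = 1.0
y = A @ x_true                               # many x satisfy A x = y; GD picks one

alpha, lr = 1e-3, 0.02                       # small init, conservative step size
u = alpha * np.ones(n)
v = alpha * np.ones(n)
for _ in range(50000):
    r = A @ (u * u - v * v) - y              # residual of the reparametrized model
    g = A.T @ r
    u -= lr * 2 * u * g                      # chain rule through x = u^2 - v^2
    v += lr * 2 * v * g
x_hat = u * u - v * v
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```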