Abstract
Relocating compact sets in an n-dimensional manifold by a self-diffeomorphism is of interest in its own right and also has significant potential applications to data classification in data science. This paper presents a theory for relocating a finite number of compact sets in \mathbb{R}^n to arbitrary target domains in \mathbb{R}^n by diffeomorphisms of \mathbb{R}^n. Furthermore, we prove that for any such collection, there exists a differentiable embedding into \mathbb{R}^{n+1} under which their images become linearly separable.
As applications of the established theory, we show that a finite number of compact datasets in \mathbb{R}^n can be made linearly separable by width-n deep neural networks (DNNs) with Leaky-ReLU, ELU, or SELU activation functions, under a mild condition. In addition, we show that any finite number of mutually disjoint compact datasets in \mathbb{R}^n can be made linearly separable in \mathbb{R}^{n+1} by a width-(n+1) DNN.
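A minimal illustration of the separability claim (a sketch, not the paper's construction): two concentric circles in \mathbb{R}^2 are compact, disjoint, and not linearly separable in the plane, yet the smooth embedding (x, y) \mapsto (x, y, x^2 + y^2) into \mathbb{R}^3 places them at different heights, where a hyperplane separates them. The radii and the separating threshold below are illustrative choices.

```python
import numpy as np

# Sample two disjoint compact sets in R^2: concentric circles of radius 1 and 2.
theta = np.linspace(0, 2 * np.pi, 200)
inner = np.stack([np.cos(theta), np.sin(theta)], axis=1)          # radius 1
outer = np.stack([2 * np.cos(theta), 2 * np.sin(theta)], axis=1)  # radius 2

def lift(points):
    """Smooth embedding of R^2 into R^3: append the squared Euclidean norm."""
    z = np.sum(points ** 2, axis=1, keepdims=True)
    return np.hstack([points, z])

inner3, outer3 = lift(inner), lift(outer)

# In R^3 the affine functional f(x, y, z) = z - 2.5 separates the images:
# the lifted inner circle lies at z = 1, the lifted outer circle at z = 4.
print(np.all(inner3[:, 2] < 2.5) and np.all(outer3[:, 2] > 2.5))  # True
```

The lift by the squared norm is one convenient embedding; the paper's result guarantees that some differentiable embedding into \mathbb{R}^{n+1} achieves linear separability for any finite collection of mutually disjoint compact sets.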