Under the supervision of the Imperial College MatchLab laboratory, designed and implemented a range of neural networks in the Keras/TensorFlow framework to reproduce results from previous research, conducted controlled experiments to investigate the underlying principles, and presented two 20-page reports written in LaTeX.
- Regularization for Deep Learning
- Optimization for Training Deep Models
- Convolutional Networks
- Sequence Modelling: Recurrent and Recursive Nets
- Representation Learning
- Structured Probabilistic Models for Deep Learning
- Deep Generative Models
- Reinforcement Learning
Tasks of interest:
1. Investigate the impact of different CNN components on classification and regression tasks, such as the number and size of filters, the depth of the architecture, the number of FC layers, and the types of activation functions and pooling
2. Investigate transfer learning, fine-tuning, hyper-parameter presetting and tuning, regularization, and optimisers; analyse the training/validation curves produced by common CNN architectures
3. Investigate RNN time-series forecasting, word-embedding comparisons, semantic matching based on LSTMs and Transformers, text generation, and the impact of different sampling temperatures in NLP tasks
4. Investigate different auto-encoders and compare the performance of their feature representations with PCA; research different loss functions for an image-denoising task
5. Investigate the qualitative/quantitative results of generative models such as VAEs and GANs
6. Implement a deep reinforcement learning network for the CartPole game; compare the performance of agents using Q-learning, SARSA, ε-greedy and other policies
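To illustrate the kind of CNN-component experiment in task 1 (filter size, pooling), here is a minimal NumPy sketch of one 2D convolution followed by max pooling; the toy image and the hand-picked vertical-edge filter are illustrative assumptions, not from the original experiments:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image with one filter."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling with a square window."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size  # drop ragged edges
    fm = feature_map[:h, :w]
    return fm.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy image: left half dark, right half bright (illustrative data).
image = np.zeros((6, 6))
image[:, 3:] = 1.0
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])       # responds to a dark-to-bright vertical edge
fmap = conv2d(image, kernel)           # shape (5, 5); peaks at the edge column
pooled = max_pool(fmap)                # shape (2, 2); edge response survives pooling
```

Changing the kernel size or pooling window here directly shows how those choices reshape the feature map, which is the kind of controlled comparison the task describes.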
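For the temperature comparison in task 3, a self-contained sketch of temperature-scaled softmax sampling (the logits are made-up example values):

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Temperature-scaled softmax: low T sharpens the distribution, high T flattens it."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()             # subtract max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = [2.0, 1.0, 0.1]               # illustrative token scores
cold = softmax_with_temperature(logits, temperature=0.5)
hot = softmax_with_temperature(logits, temperature=2.0)
# At low temperature the top token dominates; at high temperature
# probability mass spreads out, which is why generated text gets more diverse.
```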
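Task 4 compares auto-encoder features against PCA; a minimal PCA-reconstruction baseline of the kind used for such a comparison can be sketched in NumPy (the synthetic low-rank data is an assumption for illustration):

```python
import numpy as np

def pca_reconstruct(X, n_components):
    """Project X onto its top principal components and reconstruct it."""
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components]              # top principal directions
    return (Xc @ V.T) @ V + mean

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
X[:, 0] = 3.0 * X[:, 1] + 0.1 * X[:, 0]    # inject correlated, low-rank structure
err2 = np.mean((X - pca_reconstruct(X, 2)) ** 2)
err8 = np.mean((X - pca_reconstruct(X, 8)) ** 2)
# Reconstruction error drops as components are added; an auto-encoder's
# bottleneck representation is evaluated against this same error metric.
```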
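The VAE experiments in task 5 rest on the reparameterization trick; a small NumPy sketch of that sampling step (the latent dimension and parameter values are illustrative):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I), keeping sampling differentiable
    with respect to mu and log_var in an actual VAE."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(42)
mu = np.zeros(4)                       # illustrative latent mean
log_var = np.log(np.full(4, 0.25))     # sigma = 0.5
z = np.array([reparameterize(mu, log_var, rng) for _ in range(5000)])
# Empirically z has mean ~0 and standard deviation ~0.5, matching (mu, sigma).
```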
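Task 6 used a deep network for CartPole; the ε-greedy Q-learning update it builds on can be sketched in tabular form on a hypothetical corridor environment (the environment and all hyper-parameters here are assumptions for illustration, not the CartPole setup itself):

```python
import numpy as np

def q_learning_corridor(n_states=5, episodes=500, alpha=0.1, gamma=0.9,
                        epsilon=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy policy on a toy corridor:
    start in state 0, actions {0: left, 1: right}, reward 1 on reaching the end."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy selection, breaking ties randomly
            if rng.random() < epsilon or Q[s, 0] == Q[s, 1]:
                a = int(rng.integers(2))
            else:
                a = int(Q[s].argmax())
            s_next = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            # off-policy temporal-difference update (Q-learning)
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q

Q = q_learning_corridor()
# After training, the greedy policy moves right in every non-terminal state.
```

SARSA differs only in the update target: it bootstraps from the action the ε-greedy policy actually takes next rather than from the greedy maximum, which is what makes the on-policy/off-policy comparison in the task meaningful.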