Seoul National Univ. DMSE
Notice

Seminar & Colloquium

[Seminar: February 9 (Wed), 10:00 AM] The basic theory of deep reinforcement learning (DRL), the main methods, and their practical applications

Speaker

Soo Kyung Kim, Ph.D., Lawrence Livermore National Laboratory

 

A. Education

Georgia Institute of Technology, Atlanta, Georgia, USA May 2017

Ph.D. in Computational Material Science (Advisor: Prof. Hamid Garmestani)

M.S. in Computational Science & Engineering (Advisor: Prof. Richard Fujimoto)

Thesis: Hybrid Computational Modeling of Thermomagnetic Material Systems

 

Columbia University, New York, New York, USA May 2009

M.S. in Electrical Engineering

 

Ewha Womans University, Seoul, Korea Jun 2007

B.S. in Electrical Engineering, Minor in Physics (Summa Cum Laude)

 

UC Berkeley, Berkeley, California, USA Jun - Aug 2005

Exchange Program

 

B. Employment

Lawrence Livermore National Lab., Livermore, CA, USA Jan 2017 - Present

Machine Learning Staff Scientist, Center for Applied Scientific Computing

- Optimizing discrete symbolic equations using reinforcement learning (Supervisors: Brenden Petersen, Daniel M. Faissol)

- Developing a framework that leverages reinforcement learning for symbolic regression.

- Applying the developed framework for interpretability, focusing on control dynamics and the medical domain.

- ATOM: AI-driven drug design and discovery (Supervisor: Jon Allen)

- Developing a multi-task drug property prediction model, elaborating neural collaborative filtering based on the 2D scaffolds of drug targets and their kinase properties.

- Earth System Grid Federation (ESGF) group (Supervisor: Dean N. Williams)

- Tracking and forecasting extreme climate events using video prediction models.

- Materials Informatics (Supervisor: T. Yong Han)

- Predicting molecular density using graph neural networks, junction-tree VAEs, and generative models.

- Predicting the 3D crystal-structure geometry of high-energy molecules using reinforcement learning.

 

Sandia National Lab., Livermore, CA, USA Jan 2016 - Nov 2016

Research Scientist Intern, Hydrogen and Materials Science Department

(Supervisor: Jonathan Zimmerman, Catalin Spataru)

- Developing Monte Carlo software based on the LSF spin model in C++; analyzing data from ab-initio DFT using machine learning.

- Studying the effect of high-temperature spin coupling on the stacking-fault energy in stainless steel.

 

Lawrence Livermore National Lab., Livermore, CA, USA Jun - Aug 2014, Jun - Dec 2015

Research Scientist Intern - CCMS program, Physics and Life Science Division

(Supervisor: Lorin Benedict, Mike Surh)

- Developing Monte Carlo software based on the Heisenberg model in C++, statistically simulating the spin thermodynamics of FeCoxB(1-x) and CoPt.

- GPU parallelization of the Heisenberg Monte Carlo software.

 

Pacific Northwest National Lab., Richland, WA, USA Oct 2011 - Dec 2012

Research Student Intern, Advanced Computing, Mathematics and Data Division (Supervisor: Kim Ferris)

- Computing thermomagnetic properties of MnBi/MnSb using ab-initio MD (NWChem) and ab-initio DFT.

- Constructing a solvent-based carbon-capture materials database using SQL to analyze solid sorption materials with their kinetic and thermodynamic parameters.

 

| Date | Wednesday, February 9th, 2022

| Time | 10:00~12:00, 14:00~16:00

| Venue | Online (Zoom link: https://snu-ac-kr.zoom.us/j/89344132229)

 

 

 

Abstract

Deep reinforcement learning (DRL) has shown remarkable success in the last few years in solving a wide range of difficult control problems. DRL owes much of its success to recent advances in training deep neural networks (NNs), which are commonly employed as function approximators for an RL policy.
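As an illustration of a neural network serving as the function approximator for a policy, the following minimal sketch (assuming TensorFlow 2.x, the framework used in the tutorial; the state/action dimensions and network size are illustrative and not taken from the lecture) builds a small policy network and samples an action from it:

```python
# Minimal policy-network sketch in TensorFlow 2.x; dimensions are illustrative assumptions.
import tensorflow as tf

state_dim, num_actions = 4, 2  # assumed toy dimensions (e.g. a CartPole-like task)

# Policy network: maps a state vector to a probability distribution over actions.
policy = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(state_dim,)),
    tf.keras.layers.Dense(num_actions, activation="softmax"),
])

# Sample an action for a single (batched) state.
state = tf.random.uniform((1, state_dim))
action_probs = policy(state)  # shape (1, num_actions)
action = int(tf.random.categorical(tf.math.log(action_probs), num_samples=1)[0, 0])
print(action_probs.numpy(), action)
```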

In this lecture series, I will introduce the basic theory of reinforcement learning, the main methods, and their practical applications. This three-lecture series consists of (1) key concepts of RL, (2) DQN and policy gradient, and (3) scientific applications of DRL.
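For lecture (2), the two central pieces are the one-step DQN (Bellman) target and the policy-gradient (REINFORCE) loss. The sketch below is an illustrative TensorFlow 2.x version, not taken from the lecture materials; the Q-network size, input shape, and discount factor are assumptions:

```python
# Illustrative TensorFlow 2.x sketch of a DQN target and a REINFORCE-style loss.
import tensorflow as tf

num_actions, gamma = 2, 0.99  # assumed action count and discount factor

# Q-network for DQN: maps a state to one Q-value per action.
q_net = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(num_actions),
])

def dqn_targets(rewards, next_states, dones):
    """One-step Bellman targets: y = r + gamma * max_a' Q(s', a') for non-terminal steps."""
    next_q = tf.reduce_max(q_net(next_states), axis=1)
    return rewards + gamma * (1.0 - dones) * next_q

def policy_gradient_loss(policy, states, actions, returns):
    """REINFORCE loss: -mean[log pi(a_t | s_t) * G_t]; minimizing it ascends the RL objective."""
    probs = policy(states)  # (T, num_actions); policy outputs action probabilities
    # actions is an int32 vector of taken actions; gather pi(a_t | s_t) for each step t.
    idx = tf.stack([tf.range(tf.shape(actions)[0]), actions], axis=1)
    log_probs = tf.math.log(tf.gather_nd(probs, idx) + 1e-8)
    return -tf.reduce_mean(log_probs * returns)
```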

The first and second lectures consist of an introduction to the theory and a Python tutorial based on TensorFlow 2.0; the third lecture is a one-hour talk. Lecture notes (lectures 5, 6, 7) can be found at https://drive.google.com/drive/folders/1XofJrZkdlS4BnK1I6UzEypc-qdZ5L_mN?usp=sharing and the tutorial code at https://github.com/fastscience-ai/RL_toturial_AIAI2022

 

| Host | Prof. Seungwu Han (02-880-7088)