Doheon Kim (김도헌), Professor

Research Keywords
  • #dynamical systems #optimization #distributed systems #machine learning #differential equations
Research Objectives
  • Convergence analysis of population-based optimization algorithms
  • Convergence analysis of distributed optimization algorithms
  • Application of optimization algorithms to distributed training
Brief Research Experience
  • Assistant Professor, Dept. Applied Mathematics, Hanyang University (2022~present)
  • Associate Member, Korea Institute for Advanced Study (2022~present)
  • Research Fellow, Korea Institute for Advanced Study (2019~2022)
  • Published papers as lead author in Mathematical Models and Methods in Applied Sciences, Numerische Mathematik, IEEE Transactions on Automatic Control, SIAM Journal on Control and Optimization, and other journals
  • Research grant: National Research Foundation of Korea
Research Areas
  • Population-based optimization algorithms
  • Distributed optimization
  • Machine learning
  • Kinetic equations
Research Topics
  • Population-based optimization algorithm

    In a population-based optimization algorithm, a set of particles on the domain of the objective function evolves over time and eventually converges to a single point.

    In an ideal population-based optimization algorithm, the limiting point would be the minimizer of the objective function.

    Most of these algorithms do not require evaluation of gradients of the objective function.

    So, they can be applied to situations where it is difficult to evaluate gradients of the objective function, e.g., hyperparameter tuning of artificial neural networks.

    Despite the popularity of such algorithms as Particle Swarm Optimization and Consensus-Based Optimization, their convergence properties are not yet well understood.

    I am studying convergence analyses of these algorithms.
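A minimal sketch of how such an algorithm operates, using Consensus-Based Optimization dynamics as the example (the parameter values and the quadratic test function below are illustrative assumptions, not taken from the text). Note that no gradient of the objective is ever evaluated:

```python
import numpy as np

def cbo_minimize(f, dim=2, n_particles=100, alpha=30.0, lam=1.0,
                 sigma=0.5, dt=0.01, n_steps=2000, seed=0):
    """Basic (anisotropic) Consensus-Based Optimization sketch.

    Each particle drifts toward a weighted average of the swarm, where
    Gibbs-type weights exp(-alpha * f(x)) favor particles with low
    objective values, and is perturbed by noise proportional to its
    distance from that average. Only function values of f are used;
    no gradients are required.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_particles, dim))  # initial swarm
    for _ in range(n_steps):
        values = np.apply_along_axis(f, 1, X)
        # Weights concentrate on the currently best particles
        w = np.exp(-alpha * (values - values.min()))
        consensus = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - consensus
        noise = rng.standard_normal(X.shape)
        # Euler-Maruyama step: contract toward consensus + scaled noise
        X = X - lam * diff * dt + sigma * diff * noise * np.sqrt(dt)
    return consensus

# Example: a smooth quadratic whose minimizer is (1, -2)
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
x_star = cbo_minimize(f)
```

The noise term vanishes as the swarm collapses onto the consensus point, which is what forces eventual convergence to a single point, as described above.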

  • Distributed optimization algorithm

    When the information of the objective function is distributed among multiple agents, a distributed optimization algorithm helps the agents find the minimizer of the objective function by cooperating with each other.

    To be more specific, these algorithms address the situation where the objective function is the sum of multiple local functions, and information about each local function (such as its value or gradient) can be accessed by only one agent.

    This philosophy is used in distributed training, where the training data are distributed among several agents.

    Since 2008, many distributed optimization algorithms have been proposed, and I am studying their convergence analyses.
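The cooperation described above can be sketched with decentralized gradient descent (DGD), one standard algorithm of this kind; the three-agent quadratic example and all parameter values below are illustrative assumptions:

```python
import numpy as np

def decentralized_gradient_descent(grads, W, x0, step=0.05, n_steps=500):
    """Sketch of decentralized gradient descent (DGD).

    Agent i privately holds the gradient oracle grads[i] of its local
    objective and a local iterate X[i]. At every step each agent
    averages its iterate with its neighbors' (mixing matrix W, doubly
    stochastic) and then takes a gradient step on its local function.
    The iterates jointly approach the minimizer of the sum of the
    local objectives, even though no agent sees the full objective.
    """
    X = np.array(x0, dtype=float)         # one row per agent
    for _ in range(n_steps):
        X = W @ X                         # communicate: neighbor averaging
        for i, g in enumerate(grads):
            X[i] -= step * g(X[i])        # compute: local gradient step
    return X

# Example: three agents with local objectives f_i(x) = (x - c_i)^2,
# c = 0, 3, 6; the minimizer of the sum f_1 + f_2 + f_3 is x = 3.
centers = [0.0, 3.0, 6.0]
grads = [lambda x, c=c: 2.0 * (x - c) for c in centers]
# Fully connected network with uniform (doubly stochastic) weights
W = np.full((3, 3), 1.0 / 3.0)
X = decentralized_gradient_descent(grads, W, x0=[[0.0], [0.0], [0.0]])
```

With a constant step size, DGD is known to reach only an O(step) neighborhood of the minimizer: here each agent's final iterate is biased toward its own local center, while their average sits at the global minimizer. This gap between exact and inexact convergence is precisely the kind of question a convergence analysis settles.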