Welcome to the EmPas Lab.

This site is the homepage of the EMPAS (Embedded and Parallel Systems) Lab.

If you are interested in my research, please look around this site and review my publications.


If you would like to collaborate with me as a master's or Ph.D. candidate, please contact me via email (hyunjin2@dankook.ac.kr). 

(Please note that I do not provide one-sided advising.)

(*Students interested in graduate study on the implementation and application of artificial intelligence, Computational Storage, or In-Memory Computing are encouraged to contact me at hyunjin2@dankook.ac.kr.)

Announcements

Five researchers attended the 9th Open Robotics Seminar, held at Robotis Inc., Seoul, which featured 13 sessions introducing ROS applications. Picture


Eight researchers attended KCS 2023, held at High-One Resort, Kangwon-do. There were interesting topics, including Processing-In-Memory and hardware accelerators. In their free time, they enjoyed the beautiful scenery and went sightseeing. Picture


We have five servers for machine learning and semiconductor design. These servers will be moved to a dedicated server room in our department, so we are happy to be free of their noise and hot exhaust air. 


Se-Jin Kwon, an undergraduate student, joined our EmPasLab. She is interested in digging into the details and inner workings of machine learning. 


Prof. Kim and Jung-Woo Shin have submitted their work on binarized neural networks (BNNs) to ICML 2023, one of the top-tier AI conferences. Fingers crossed! 


During this winter vacation, student researchers and Prof. Kim will investigate several promising topics in artificial intelligence and machine learning, notably vision transformers, reinforcement learning for robot-arm control, TinyML, RISC-V-based Posit arithmetic, and BNNs. Good luck!!


Young-Wook Kwon, a master's candidate, joined our EmPasLab. He is interested in developing new vision transformers and in new ideas for TinyML. 


Starting December 2022, a new project funded by SK Hynix (Dec. 2022–Nov. 2024) begins, in which methods and a user interface for building labeled data for wafer testing will be developed. 


Prof. Kim and his advised students have submitted their TinyML work to the ICCAD TinyML Contest. Binarized neural networks can be implemented on a small embedded board, significantly reducing hardware costs in terms of latency and parameter storage. Through this work, the team gained hands-on experience with the practical implementation of TinyML and with methods for debugging TinyML results. 
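To illustrate why BNNs cut latency and storage so sharply on small boards, here is a minimal sketch (my own illustration, not the lab's contest code) of the standard XNOR/popcount trick: once weights and activations are constrained to +1/-1 and bit-packed, a dot product becomes a single XNOR followed by a popcount instead of a chain of multiply-accumulates.

```python
def binarize(values):
    """Map real values to bits: 1 for >= 0, 0 for < 0 (i.e., the sign)."""
    return [1 if v >= 0 else 0 for v in values]

def pack_bits(bits):
    """Pack a list of 0/1 bits into a single integer word."""
    word = 0
    for b in bits:
        word = (word << 1) | b
    return word

def bnn_dot(x_word, w_word, n):
    """Binarized dot product over n bit-packed elements.
    XNOR marks positions where signs agree; popcount counts them.
    Mapping back to the +/-1 domain gives 2*matches - n."""
    xnor = ~(x_word ^ w_word) & ((1 << n) - 1)  # 1 where bits agree
    matches = bin(xnor).count("1")
    return 2 * matches - n

x = [0.5, -1.2, 0.3, -0.7]
w = [1.0, -0.4, -0.9, -0.1]
print(bnn_dot(pack_bits(binarize(x)), pack_bits(binarize(w)), len(x)))  # prints 2
```

On a real MCU the same idea uses native word-wide XNOR and a hardware popcount instruction, so 32 or 64 multiply-accumulates collapse into two instructions, and each weight occupies one bit instead of a 32-bit float.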


Since this summer, Prof. Kim and his advised students have been studying binarized neural networks (BNNs). For 1-D data (e.g., sound), BNNs are a very practical choice for implementation on lightweight MCUs. Moreover, their low latency and low power consumption can extend the range of applications for neural networks. 
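A short sketch of how a real-valued 1-D kernel is typically binarized (XNOR-Net-style, a well-known technique; the function name and values here are illustrative, not the lab's code): each weight keeps only its sign, and a single per-kernel scaling factor alpha, the mean absolute weight, preserves the kernel's magnitude.

```python
def binarize_kernel(weights):
    """Binarize a 1-D kernel XNOR-Net-style.
    Returns (alpha, signs), where alpha = mean(|w|) is a per-kernel
    scale and signs holds +1.0/-1.0 per weight, so w ~= alpha * signs."""
    alpha = sum(abs(w) for w in weights) / len(weights)
    signs = [1.0 if w >= 0 else -1.0 for w in weights]
    return alpha, signs

# Dyadic example values chosen so the arithmetic is exact:
alpha, b = binarize_kernel([0.5, -0.25, 0.75, -0.5])
# alpha == 0.5, b == [1.0, -1.0, 1.0, -1.0]
```

Storage drops from 32 bits to 1 bit per weight (plus one float per kernel for alpha), which is what makes BNNs realistic on MCUs with a few hundred kilobytes of flash.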


On June 26, 2022, a manuscript titled "CTMQ: Cyclic Training of Convolutional Neural Networks with Multiple Quantization Steps" was uploaded to arXiv.


On May 23–24, 2022, an open-lab event will be held to explain graduate school life and how to make research contributions. (#403, 2nd Engineering Building, Dankook University) 


Prof. HyunJin Kim had a meeting with ARM to apply for the ARM Academic Access (AAA) Program. The program provides ARM commercial IPs, training programs, and materials for research with ARM IPs. 


A paper titled "A storage-efficient ensemble classification using filter sharing on binarized convolutional neural networks" by HyunJin Kim, Mohammed Alnemari, and Nader Bagherzadeh has been accepted in PeerJ Computer Science.


A paper titled "PresB-Net: parametric binarized neural network with learnable activations and shuffled grouped convolution" by Jungwoo Shin and HyunJin Kim has been accepted in PeerJ Computer Science.


A paper titled "Highly accurate approximate multiplier using heterogeneous inexact 4-2 compressors for error-resilient application" by Jaewoo Lee and HyunJin Kim has been accepted in IEMEK Journal of embedded Systems and Applications (Domestic). 


A paper titled "A Cost-Efficient Approximate Dynamic Ranged Multiplication and Approximation-Aware Training on Convolutional Neural Networks" by HyunJin Kim and Alberto A. Del Barrio has been accepted in IEEE Access. 


A paper titled "PLAM: a Posit Logarithm-Approximate Multiplier" by Raul Murillo, Alberto A. Del Barrio, Guillermo Botella, Min Soo Kim, HyunJin Kim, Nader Bagherzadeh has been accepted in IEEE Transactions on Emerging Topics in Computing. 


A project titled "ACDNN: Approximate Computing-based Deep Neural Networks using Inaccurate Arithmetic Units for Low-Power Systems" will be supported by National Research Foundation (NRF) (June 2021 ~ February 2024) - 2021.05.27.


A paper titled “A k-Mismatch String Matching for Generalized Edit Distance using Diagonal Skipping Method” by HyunJin Kim has been accepted in PLOS One.


A paper titled "AresB-Net: accurate residual binarized neural networks using shortcut concatenation and shuffled grouped convolution" by HyunJin Kim has been accepted in PeerJ Computer Science. 


A paper titled "A Low-Cost Compensated Approximate Multiplier for Bfloat16 Data Processing on CNN Inference" by HyunJin Kim has been accepted in ETRI Journal. 


A Paper titled "Effects of Approximate Multiplication on Convolutional Neural Networks," Kim, M. S., Del Barrio, A. A., Kim, H., & Bagherzadeh, N has been published in the early access of IEEE Transactions on Emerging Topics in Computing