A happy geek: I love learning new skills and enjoy building large systems, almost for their own sake. Software Engineer with experience at every level of a project, including design and architecture, development and testing, and setting up reliable production environments. Skilled at writing well-designed low-level system programs using best practices in Go, C, and C++. Fast learner, hard worker, and team player, flexible with a variety of tools. Dedicated to streamlining processes and efficiently resolving project issues using the most suitable technology.
“My dear, here we must run as fast as we can, just to stay in place. And if you wish to go anywhere you must run twice as fast as that.” — Lewis Carroll, Alice in Wonderland
Master in Computer Science, 2016
Preparatory School in Physics and Chemistry, 2013
PacketAI aims to develop an IT infrastructure monitoring platform, similar to Datadog and Dynatrace, but equipped with machine learning to predict incidents in advance and locate their root causes.
I joined when PacketAI had just received its seed funding, with only two other developers. I quickly got a grasp of their stack, and within days of my arrival I was adding new features to the agent, software that runs on client hosts to collect events and metrics. I designed and developed all of PacketAI's microservices from scratch, all in Go, plus a Logstash node.
I also worked with docker-compose, and was involved in developing the CI/CD pipelines for our Go projects on GitLab.
CSRC is a publicly funded research center within KAIST university. I was free to define the problems I worked on, figure out potential solutions, design and develop their implementation, and finally test and evaluate these prototypes. This experience allowed me to demonstrate my ability to abstract: finding solutions based on principles.
I also made full use of my engineering skills, completing three large projects. First, a modification of the Linux kernel's memory allocation code for drivers (in C). Second, an improvement to the dynamic testing tool of LLVM, a compiler infrastructure project written in C++; this work was merged into the mainline by a team at Google. And lastly, Ankou, my largest project, a fuzzer I developed from scratch in Go. Ankou found more than a thousand unique crashes in open-source projects.
This is a checklist of the advice given by Dale Carnegie in his famous book, ‘How to Win Friends and Influence People’. The book starts with three general principles followed by more practical advice. Although the whole idea of being nice to others, considering their opinions, and so on is quite simple in theory, it is not our first instinct and it is hard to put into practice. So I thought a short checklist to regularly refresh our minds could be beneficial to all.
We all want to predict the future. Once we know what will happen, we can prepare, take advantage of the situation, and become stronger. For example, if you knew which stocks would go up or down, you could make a lot of money. The ability to predict is the foundation of science: we experiment to find models that can forecast the future. Nassim Taleb demolished this idea in his book ‘The Black Swan’: the most impactful events, the game changers, are unpredictable.
Hi everyone. This is my first ‘technical’ blog post. I have seen people say that writing helps grow your explanation skills, which I sincerely lack. So I decided that the next time I struggle with something because it feels undocumented, I'll try to write a post explaining how I did it. If even one person reads this and finds it even remotely useful, I'll consider the job done. Ask any question, I'll be happy to answer.
Entropic is an information-theoretic power schedule implemented on top of LibFuzzer. It boosts performance by changing the weights assigned to the seeds in the corpus: seeds revealing more “information” are assigned a higher weight. Entropic has been independently evaluated by a team at Google and invited for integration into mainline LibFuzzer at LLVM (a C++ code base), whereupon it was subject to a substantial code review process.
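The core idea of an information-theoretic schedule can be sketched in a few lines: score each seed by the Shannon entropy of its branch-hit-count distribution, so seeds whose executions spread "information" across many behaviors get more fuzzing energy. This is a minimal illustrative sketch, not LibFuzzer's actual implementation; the `seedEntropy` function and its hit-count representation are assumptions for the example.

```go
package main

import (
	"fmt"
	"math"
)

// seedEntropy computes the Shannon entropy (in bits) of a seed's
// branch-hit-count distribution. A seed whose executions spread
// hits evenly across many branches carries more "information"
// than one concentrated on a single branch, so it earns a higher
// weight in the power schedule.
func seedEntropy(hitCounts []int) float64 {
	total := 0
	for _, c := range hitCounts {
		total += c
	}
	if total == 0 {
		return 0
	}
	h := 0.0
	for _, c := range hitCounts {
		if c == 0 {
			continue
		}
		p := float64(c) / float64(total)
		h -= p * math.Log2(p)
	}
	return h
}

func main() {
	uniform := []int{5, 5, 5, 5} // hits spread over four branches
	skewed := []int{17, 1, 1, 1} // hits concentrated on one branch
	fmt.Printf("uniform: %.3f bits\n", seedEntropy(uniform))
	fmt.Printf("skewed:  %.3f bits\n", seedEntropy(skewed))
}
```

Under such a schedule, the uniform seed would receive more energy than the skewed one, since its entropy is strictly higher.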
Grey-box fuzzing is an evolutionary process, which maintains and evolves a population of test cases with the help of a fitness function. Fitness functions used by current grey-box fuzzers are not informative in that they cannot distinguish different program executions as long as those executions achieve the same coverage. The problem is that current fitness functions only consider a union of data, but not their combination. As such, fuzzers often get stuck in a local optimum during their search. In this paper, we introduce Ankou, the first grey-box fuzzer that recognizes different combinations of execution information, and present several scalability challenges encountered while designing and implementing Ankou. Our experimental results show that Ankou is 1.94× and 8.0× more effective in finding bugs than AFL and Angora, respectively.
This paper surveys both the academic literature and the open-source tools in the field of fuzzing. We present a unified, general-purpose model to better understand the design and trade-offs of fuzzers.
The monolithic kernel is one of the prevalent configurations among the various kernel design models. While monolithic kernels excel in performance and manageability, they are unequipped for runtime system updates, which brings the need for kernel extensions. Although kernel extensions are a convenient measure for system management, it is well established that they make the system prone to rootkit attacks and kernel exploitation, as they share a single memory space with the rest of the kernel. To address this problem, various forms of isolation (e.g., running extensions as separate processes) have been proposed, yet their performance overhead is often too high or incompatible with a general-purpose kernel. In this paper, we propose the Domain Isolated Kernel (DIKernel), a new kernel architecture which securely isolates untrusted kernel extensions with minimal performance overhead. DIKernel leverages the hardware-based memory domain feature of the ARM architecture, and prevents system manipulation attacks originating from kernel extensions, such as rootkits and exploits caused by buggy kernel extensions. We implemented DIKernel on top of the Linux 4.13 kernel in 1500 LOC. Performance evaluation indicates that DIKernel imposes negligible overhead, as observed by cycle-level microbenchmarks.