Cloud Computing & Software Defined Infrastructure

  Software Defined Infrastructure (SDI) is an infrastructure in which computing, networking, and storage resources are all virtualized and managed entirely by software. It combines the concepts of Cloud Computing, Software Defined Networking (SDN), Network Function Virtualization (NFV), and Software Defined Storage (SDS). Virtualization technologies (server, network, and storage virtualization) and efficient management of the virtualized resources are central to implementing this environment. The following are the research topics we are currently investigating in our lab:

- Virtualization techniques (CPU/Network/Storage)
- Hypervisors (Xen, VMware, KVM, etc.) and containers (Docker, etc.)
- Resource provisioning, management, and optimization in cloud computing
- Performance optimization in virtualized cloud environments
- GPU virtualization and HPC over Cloud

- Scalable controller architecture and network resource optimization in SDN
- NFV architecture and service chaining mechanism

- Edge and Fog computing
- Open source platforms such as OpenStack, OpenDaylight, and the Ceph storage system
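As a concrete illustration of the resource-provisioning topic above, the sketch below shows a first-fit heuristic for placing VMs onto hosts. All host names, VM names, and capacities are hypothetical, and production cloud schedulers (e.g., in OpenStack) use far richer placement policies; this only captures the basic bin-packing flavor of the problem.

```python
# First-fit VM placement: assign each VM to the first host with enough
# remaining CPU and memory. Hosts/VMs below are hypothetical examples.

def first_fit_placement(hosts, vms):
    """hosts: {name: {"cpu": n, "mem": n}}, vms: {name: (cpu, mem)}."""
    placement = {}
    free = {h: dict(cap) for h, cap in hosts.items()}  # remaining capacity
    for vm, (cpu, mem) in vms.items():
        for host, cap in free.items():
            if cap["cpu"] >= cpu and cap["mem"] >= mem:
                cap["cpu"] -= cpu
                cap["mem"] -= mem
                placement[vm] = host
                break
        else:
            placement[vm] = None  # no host can accommodate this VM
    return placement

hosts = {"h1": {"cpu": 8, "mem": 16}, "h2": {"cpu": 4, "mem": 8}}
vms = {"vm1": (4, 8), "vm2": (4, 8), "vm3": (4, 8)}
print(first_fit_placement(hosts, vms))
# {'vm1': 'h1', 'vm2': 'h1', 'vm3': 'h2'}
```

First-fit is a simple greedy baseline; much of the optimization research in this area concerns smarter placement objectives (consolidation, load balancing, energy) under dynamic workloads.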

Software Support for Multicore/Manycore Processors

  The rapid advance in semiconductor technology has allowed the number of cores, both within a single chip (multicore) and across the bus (manycore), to keep increasing dramatically. The recent introduction of Intel's MIC technology (Xeon Phi) has also added considerable complexity to the software running on these processors. To fully utilize this environment, the software running on multicore/manycore processors, from the OS and system software up to application software, must be optimized to obtain the performance benefits this powerful hardware provides. We are currently investigating the following research issues in our lab:

- Linux scalability analysis over multicore (NUMA) architectures
- Improving Linux file system performance on NUMA architectures
- Performance optimization and parallelization over Xeon Phi and GP-GPU
- Scalable OS architecture and optimization for multicore/manycore processors
- Programming models for multicore/manycore processors
- Memory-based file systems for multicore/manycore processors
- Storage architecture and OS support for SSDs and NVMe
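A minimal sketch of the parallelization theme above: distributing a CPU-bound computation across cores with a process pool. The function and chunking scheme here are illustrative (not taken from any specific project), and real optimization work at this level also has to account for NUMA placement, cache behavior, and synchronization costs that this sketch ignores.

```python
# Split a CPU-bound computation into chunks and run the chunks on
# separate processes (one per core) via a multiprocessing Pool.
from multiprocessing import Pool
import os

def partial_sum(chunk):
    """Work unit executed in a worker process."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(n, workers=None):
    workers = workers or os.cpu_count() or 2
    data = list(range(n))
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(1000))  # 332833500, same as the serial sum
```

The speedup of such a pool depends heavily on how evenly the chunks load the cores and on where their data lands in memory, which is exactly what NUMA-aware scheduling and allocation research tries to control.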

Software Platform for Big Data and Analytics

   As data grows larger, the software platforms that handle and analyze it, either in real time or off-line, have become crucial components of big data processing. To meet the increasing demand for efficient big data processing, several big data analytics platforms such as Hadoop and Spark have been proposed. Two important design challenges in building such platforms are how to design them efficiently and how to scale them. We are currently evaluating various big data analytics platforms and the design choices associated with them, and we are investigating algorithms and techniques to improve their performance and scalability.

- Scalability analysis and workload characterization of analytics engines
- Performance optimization of Hadoop and MapReduce
- Evaluation and scalability analysis of in-memory platforms such as Spark
- Spark over multicore and NUMA architecture
- Big data processing using cloud, GPUs, and multicore CPUs
- I/O performance improvement
- Key-value storage
- In-storage processing
- Real-time and in-stream processing of big data
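The MapReduce model underlying platforms such as Hadoop can be sketched in a few lines. The word-count example below is illustrative only: it collapses the distributed shuffle into an in-memory group-by, whereas a real engine partitions each phase across a cluster.

```python
# Word count in the MapReduce style: a map phase emits (key, value)
# pairs, a shuffle groups values by key, and a reduce phase aggregates.
from collections import defaultdict

def map_phase(line):
    """Mapper: emit (word, 1) for every word in an input line."""
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    """Group all values emitted for the same key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reducer: sum the counts for each word."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big compute", "big data"]
pairs = [p for line in lines for p in map_phase(line)]
print(reduce_phase(shuffle(pairs)))  # {'big': 3, 'data': 2, 'compute': 1}
```

In an actual deployment, the shuffle is the dominant cost: it moves intermediate pairs across the network, which is why much of the performance-optimization work on Hadoop and Spark targets that phase.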

Autonomic Computing and Adaptive Systems

   Autonomic computing is a self-managing computing model inspired by the autonomic nervous system of the human body. This model not only copes with the complexity of the IT infrastructure by managing resources without human intervention, but also makes systems adaptive by dynamically changing their operation based on the current situation. Autonomic systems provide four basic functions: self-configuration, self-healing, self-optimization, and self-protection. They are also characterized by three features: automation, adaptivity, and awareness. Autonomic computing is one of the building blocks for pervasive environments such as the Internet of Things (IoT). The following are the research topics we are currently investigating in our lab:

- Autonomic management of cloud resources and services
- Workload characterization and prediction algorithms
- Software architecture and implementation of autonomic systems
- Component design for self-adaptation and reconfiguration
- Adaptable and reconfigurable protocol stack design
- Middleware for IoT environments
- IoT cloud and sensor cloud
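The self-optimization function described above is commonly structured as a monitor-analyze-plan-execute (MAPE) feedback loop. The sketch below shows such a loop making scale-out/scale-in decisions from a load trace; the thresholds, the trace, and the scaling action are all hypothetical values chosen for illustration.

```python
# A minimal MAPE loop for self-optimization: monitor a load metric,
# analyze it against thresholds, plan an action, execute the action.
# Thresholds (0.3 / 0.8) and the load trace below are hypothetical.

def analyze(load, low=0.3, high=0.8):
    """Analyze + plan: pick an action from the observed load."""
    if load > high:
        return "scale_out"
    if load < low:
        return "scale_in"
    return "steady"

def execute(replicas, plan):
    """Execute: apply the planned action, never dropping below 1 replica."""
    if plan == "scale_out":
        return replicas + 1
    if plan == "scale_in":
        return max(1, replicas - 1)
    return replicas

def mape_loop(load_trace, replicas=2):
    history = []
    for load in load_trace:                 # monitor
        plan = analyze(load)                # analyze + plan
        replicas = execute(replicas, plan)  # execute
        history.append(replicas)
    return history

print(mape_loop([0.9, 0.85, 0.5, 0.2, 0.1]))  # [3, 4, 4, 3, 2]
```

Research in this area replaces the fixed thresholds with workload characterization and prediction models, so that the loop acts before the load actually crosses a limit rather than after.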

  In addition to the research areas described above, we are also investigating the following issues:

- Embedded software and operating systems (Linux)
- Performance optimization of the Android platform
- Communication support for high-performance computing systems
- Digital forensics techniques and algorithms
