Keynote Speakers


Prof. Onur Mutlu

  • Title: Rethinking Memory/Storage System Design for Data-Intensive Computing

  • Abstract: The memory (or memory + storage) system is a fundamental performance and energy bottleneck in almost all computing systems. Recent system design, application, and technology trends that require more capacity, bandwidth, efficiency, and predictability out of the memory system make it an even more important system bottleneck. At the same time, DRAM and flash technologies are experiencing difficult technology scaling challenges that make the maintenance and enhancement of their capacity, energy-efficiency, and reliability significantly more costly with conventional techniques. In this talk, we examine some promising research and design directions to overcome challenges posed by memory scaling. Specifically, we discuss three key solution directions: 1) enabling new memory architectures, functions, interfaces, and better integration of the memory and the rest of the system; 2) designing a memory system that intelligently employs multiple memory technologies and coordinates memory and storage management using non-volatile memory technologies; and 3) providing predictable performance and QoS to applications sharing the memory/storage system. If time permits, we may also briefly describe our ongoing related work in combating scaling challenges of NAND flash memory.

  • Biography: Dr. Onur Mutlu is the Dr. William D. and Nancy W. Strecker Early Career Professor at Carnegie Mellon University. His broader research interests are in computer architecture and systems, especially in the interactions between languages, operating systems, compilers, and microarchitecture. He enjoys teaching and researching problems in computer architecture, including problems related to the design of memory/storage systems, multi-core architectures, and scalable and efficient systems. He obtained his PhD and MS in ECE from the University of Texas at Austin (2006) and BS degrees in Computer Engineering and Psychology from the University of Michigan, Ann Arbor. Prior to Carnegie Mellon, he worked at Microsoft Research (2006-2009), Intel Corporation, and Advanced Micro Devices. He is a recipient of the IEEE Computer Society Young Computer Architect Award, the CMU College of Engineering George Tallman Ladd Research Award, the Intel Early Career Faculty Honor Award, IBM Faculty Partnership Awards, the HP Innovation Research Program Award, the Microsoft Software Engineering Innovation Foundation Award, the US National Science Foundation CAREER Award, best paper awards at ASPLOS, RTAS, VTS, and ICCD, and a number of "computer architecture top pick" paper selections by IEEE Micro magazine.

Prof. Keqin Li

  • Title: Power-Performance Tradeoff in a Federated Cloud

  • Abstract: For multiple heterogeneous multicore server processors across clouds and data centers, the aggregated performance of the cloud of clouds can be optimized by load distribution and balancing. Energy efficiency is one of the most important issues for large-scale server systems in current and future data centers. Multicore processor technology provides new levels of performance and energy efficiency. Our investigation aims to develop power- and performance-constrained load distribution methods for cloud computing in current and future large-scale data centers. In particular, we address the problem of optimal power allocation and load distribution for multiple heterogeneous multicore server processors across clouds and data centers. Our strategy is to formulate optimal power allocation and load distribution for multiple servers in a cloud of clouds as optimization problems, i.e., power-constrained performance optimization and performance-constrained power optimization. These are well-defined multivariable optimization problems that explore the power-performance tradeoff by fixing one factor and optimizing the other, from the perspective of optimal load distribution. Such power and performance optimization is clearly important for a cloud computing provider to efficiently utilize all the available resources. We model a multicore server processor as a queueing system with multiple servers. Our optimization problems are solved for two different models of core speed: one assumes that a core runs at zero speed when it is idle, and the other assumes that a core always runs at a constant speed. Our results provide new theoretical insights into power management and performance optimization in data centers.

  • Biography: Dr. Keqin Li is a SUNY Distinguished Professor of computer science at the State University of New York at New Paltz. He is also a distinguished professor of the Chinese National Recruitment Program of Global Experts (1000 Plan) at Hunan University and the National Supercomputing Center in Changsha, China, and an Intellectual Ventures endowed visiting chair professor at the National Laboratory for Information Science and Technology, Tsinghua University, Beijing, China. His current research interests include parallel computing, distributed computing, energy-efficient computing and communication, heterogeneous computing systems, wireless communication, sensor networks, peer-to-peer file systems, cloud computing, big data computing, storage and file systems, CPU-GPU hybrid and cooperative computing, multicore computing, service computing, the Internet of things and cyber-physical systems, bioinformatics, signal processing, uncertain databases, soft computing, and network security. He has published over 290 journal articles, book chapters, and refereed international conference papers, and has received several best paper awards for his work. He has served in various capacities for numerous international conferences as general chair, program chair, workshop chair, track chair, and steering/advisory/award/program committee member. He currently serves or has served on the editorial boards of IEEE Transactions on Parallel and Distributed Systems, IEEE Transactions on Computers, Journal of Parallel and Distributed Computing, International Journal of Parallel, Emergent and Distributed Systems, International Journal of High Performance Computing and Networking, Optimization Letters, and International Journal of Big Data Intelligence.
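To make the flavor of the abstract above concrete — a multicore server modeled as a queueing system with multiple servers, and load distributed across heterogeneous servers to minimize mean response time — here is a minimal Python sketch. It uses the standard M/M/m (Erlang C) formulas and a simple grid search over the load split; core speeds are held fixed, so the power-allocation side of the problem is omitted, and all function names and parameters are illustrative assumptions, not taken from the speaker's papers.

```python
import math

def erlang_c(m, a):
    """Probability that an arriving task must wait in an M/M/m queue.

    m: number of cores (servers); a = lam / mu is the offered load.
    Requires a < m for stability."""
    head = sum(a ** k / math.factorial(k) for k in range(m))
    tail = a ** m / math.factorial(m)
    return tail / ((1 - a / m) * head + tail)

def mean_response_time(lam, mu, m):
    """Mean response time of an M/M/m queue: one service time plus the
    expected queueing delay (valid only when lam < m * mu)."""
    a = lam / mu
    return 1 / mu + erlang_c(m, a) / (m * mu - lam)

def best_split(lam_total, servers, steps=1000):
    """Grid-search the arrival-rate split between two heterogeneous
    multicore servers that minimizes the overall mean response time.

    servers: [(mu1, m1), (mu2, m2)] -- per-core service rate, core count.
    Returns (lam1, best_mean_response_time)."""
    (mu1, m1), (mu2, m2) = servers
    best_l1, best_t = None, float("inf")
    for i in range(steps + 1):
        l1 = lam_total * i / steps
        l2 = lam_total - l1
        if l1 >= m1 * mu1 or l2 >= m2 * mu2:
            continue  # skip unstable splits
        # response time averaged over all tasks, weighted by where they go
        t = (l1 * mean_response_time(l1, mu1, m1)
             + l2 * mean_response_time(l2, mu2, m2)) / lam_total
        if t < best_t:
            best_l1, best_t = l1, t
    return best_l1, best_t
```

For two identical 4-core servers (one task per second per core) and a total arrival rate of 3 tasks/s, the search recovers the intuitive balanced split and yields a lower mean response time than sending all load to a single server; with heterogeneous speeds or core counts the optimal split shifts toward the faster machine, which is the tradeoff the abstract's optimization problems formalize.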