I am a fifth-year PhD student in the Machine Learning Department of the School of Computer Science at Carnegie Mellon University. My advisor is Tom Mitchell, and I work on Never-Ending Learning. My research revolves around the idea that learning may be an agreement-driven process. I look at learning within the context of a multi-agent system, where agents learn by trying to agree with other agents they trust. This has connections to game theory, optimization, and other technical fields, but it also has some interesting philosophical implications. Is there some underlying truth that we are trying to reach by learning? If so, is there a fundamental limit to how much of that truth we can ever learn? And, perhaps more importantly, what useful applications are there for the answers we may find?

Previously, I did some research around the idea of self-reflection in the context of machine learning. More specifically, I developed a framework that allowed learning systems that leverage several different learning mechanisms (i.e., algorithms) to “understand” how well each of those mechanisms performs in different domains (this also relates to the notion of truth mentioned in the previous paragraph). It also allowed them to improve their learning rate by taking advantage of the “expertise” of each learning mechanism and by directing each such mechanism to learn more efficiently.

Before I joined CMU, I graduated with an M.Eng. in Electrical and Electronic Engineering from Imperial College London. For my Master’s thesis, I proposed a way to use topic modelling methods to classify human motion.

I am from Athens, Greece!