Research
A brief statement of my current research.
In general, my research is dedicated to developing more human-like AI systems. I aim to replicate key strengths of human learning: data efficiency, incremental and conceptual learning, the construction of modular and well-organized conceptual knowledge, adaptability to diverse tasks, and robust reasoning and inference. While neural models have achieved remarkable success, they often fall short of human-like learning and reasoning. To address these limitations, I explore alternatives such as weak methods (including traditional machine learning approaches) and cognitive architectures as promising paths forward.
Under Ken, my current research leverages the Companion cognitive architecture, which we view as a framework for building software social organisms, a critical step toward genuinely human-like AI. One line of my work explores a path to disintermediation in Companion: enabling large language models (LLMs) to generate predicate calculus representations (semantic interpretations) directly from natural language input, bypassing intermediate stages of Companion's natural language understanding system. My broader research centers on learning by reading with Companion, which involves processing natural language text into forms suitable for reasoning, with applications such as word sense disambiguation.
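To give a flavor of the target representation, here is a minimal, hypothetical sketch: a parser that turns a Lisp-style predicate calculus string (the kind of semantic interpretation an LLM could be asked to emit) into a nested structure that downstream reasoning could consume. The predicates shown are illustrative examples, not Companion's actual vocabulary or pipeline.

```python
import re

def parse_sexpr(text):
    """Parse a Lisp-style predicate calculus expression into nested lists.

    For example, "(isa Rover Dog)" becomes ['isa', 'Rover', 'Dog'].
    A sketch only; real Companion representations are much richer.
    """
    # Tokenize into parentheses and bare symbols.
    tokens = re.findall(r"\(|\)|[^\s()]+", text)

    def read(pos):
        tok = tokens[pos]
        if tok == "(":
            out = []
            pos += 1
            while tokens[pos] != ")":
                node, pos = read(pos)
                out.append(node)
            return out, pos + 1  # skip the closing paren
        return tok, pos + 1      # a bare symbol

    expr, pos = read(0)
    if pos != len(tokens):
        raise ValueError("trailing tokens after expression")
    return expr
```

One appeal of this direction is that well-formedness of the LLM's output can be checked mechanically (the parser raises on malformed strings), giving a simple validation gate before any statement reaches the reasoner.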
Before that, I worked with Chris MacLellan on human-like learning systems, particularly Cobweb, demonstrating its capabilities through various extensions and applications:
- Rapid learning.
- Incremental learning without catastrophic forgetting. With Nicki Barari, we extended Cobweb to Cobweb/4V, which supports visual learning and performance tasks. Cobweb/4V learns rapidly, performs comparably to neural networks, and resists catastrophic forgetting. Associated publication.
- Scalability. With Zekun Wang, we developed Cobweb/4L, which supports masked-word prediction. This extension introduces a novel multi-node prediction mechanism and achieves performance comparable to transformers in low-data settings, along with greater robustness and adaptability across data scales without hyperparameter tuning. Associated publication (in review; see our previous publication).
- Human category learning. With Sashank Varma, I explored Cobweb's potential as a model of human category learning. Our work demonstrates its alignment with classical human category learning effects and its flexibility to exhibit both exemplar-like and prototype-like learning via its tree-structured hierarchy. Associated publication.
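Common to all of these projects is Cobweb's evaluation metric: each restructuring operation (placing an instance in a child, creating a new class, merging, splitting) is scored by category utility. Below is a minimal sketch of that metric for nominal attributes, assuming the standard Gluck-and-Corter-style formulation; it is not the implementation from any of the publications above.

```python
from collections import Counter

def category_utility(partition):
    """Category utility of a partition of instances into clusters.

    `partition` is a list of clusters; each cluster is a list of instances,
    and each instance is a dict mapping a nominal attribute to its value.
    CU rewards partitions whose clusters make attribute values predictable
    relative to the unpartitioned baseline, normalized by cluster count.
    """
    all_instances = [inst for cluster in partition for inst in cluster]
    n = len(all_instances)
    k = len(partition)

    def expected_correct(instances):
        # Sum over attribute-value pairs of P(A_i = V_ij)^2: the expected
        # number of attribute values guessed correctly by probability matching.
        counts = Counter((a, v) for inst in instances for a, v in inst.items())
        m = len(instances)
        return sum((c / m) ** 2 for c in counts.values())

    baseline = expected_correct(all_instances)
    gain = sum(
        (len(cluster) / n) * (expected_correct(cluster) - baseline)
        for cluster in partition
    )
    return gain / k
```

For example, splitting `{color: red, red, blue, blue}` into a pure red cluster and a pure blue cluster scores higher than any mixed split, which is exactly the pressure that shapes Cobweb's concept hierarchy as instances arrive one at a time.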