research

A brief statement of my current research.

My current research focuses on human-like learning systems, artificial intelligence, concept learning, human categorization, and computational cognitive science in general.

I’m currently a master’s student in the Computational Science and Engineering (CSE) program at Georgia Tech, where I work closely with Chris MacLellan on human-like learning systems and with Sashank Varma on computational cognitive science. Here are my current research interests and what I’ve been doing on the academic path:

  • Human-like Learning. A human-like learning system might capture one or more of the following features of human learning: it is modular, composed during performance, relational, piecemeal, incremental, guided by knowledge, and refined rapidly. I am grateful to be doing research with Chris MacLellan on Cobweb, an incremental, human-like concept formation and categorization system.
    • Working with Chris, my research focuses on Cobweb, an unsupervised system that incrementally forms concepts from streams of examples, and we believe Cobweb has the potential to learn without catastrophic forgetting. In collaboration with Nicki Barari, we extended this work into Cobweb/4V, a vision learning framework built on Cobweb’s principles and designed to process matrix-like data such as images. Cobweb/4V performs well on vision tasks and shows resilience to catastrophic forgetting.
    • Further, Chris and I have been exploring more possibilities in the system by testing the assumptions and inductive biases behind its mechanism. We recently explored connections between Cobweb and the Rational Model of Categorization (and rational models of categorization in general) in order to integrate rational analysis into Cobweb’s learning and categorization processes. These explorations have yielded new variants of Cobweb that achieve strong results on natural language tasks.
  • Categorization and Concept Learning. Beyond human-like learning systems, I am enthusiastic about categorization models and category learning. In seeking to understand human learning, on the one hand, I am dedicated to building concept learning systems that learn from only modest amounts of data, grounded in knowledge-based AI algorithms and in categorization models, both rule-based and similarity-based. On the other hand, I question the notion that humans acquire concepts or object features by absorbing all available information; my hypothesis is that a small set of pivotal, discriminant features is crucial to concept and object learning.
    • Building on the connections we established between rational analysis in categorization models and Cobweb, Sashank and I are examining Cobweb’s alignment with the rational model of categorization and with similarity-based models more broadly. This investigation seeks to uncover its potential as a prototype model that “learns in exemplars”. We are also thoroughly evaluating the revised system against human data, with an emphasis on its close alignment with human categorization processes.
    • As part of my current projects, I am integrating the CHAI (Continual Hierarchical Adaption for Inference) social interaction framework into Cobweb. This integration supports object learning from nominal utterance pairs, so that learned objects can later be referenced via partial utterance descriptions, much like a reference game. The method reduces the number of utterances or attribute-value pairs needed to reference an object by prioritizing the most discriminant dimensions in the learned Cobweb system. We are also investigating how these dimensions relate to basic-level concepts.
  • Cognitive Psychology, Knowledge-Based AI. Further, I have a profound interest in cognitive psychology, computational psychology, and cognitive modeling. I am also keen to explore alternative AI algorithms, particularly those that “learn the world”, require minimal data, and are grounded in knowledge-based AI fundamentals such as inductive learning and analogical learning.
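To give a concrete flavor of the incremental, information-driven systems discussed above: Cobweb sorts each new instance into a concept hierarchy by greedily maximizing category utility (Fisher, 1987). The sketch below is my own minimal Python illustration of that heuristic, not the implementation used in our research; the function names and the dict-based encoding of instances are assumptions made for the example.

```python
from collections import Counter

def attr_value_probs(instances):
    """P(A_i = V_ij): probability of each attribute-value pair over a
    list of instances, where each instance is a dict of nominal values."""
    counts = Counter((a, v) for inst in instances for a, v in inst.items())
    n = len(instances)
    return {av: c / n for av, c in counts.items()}

def expected_correct_guesses(instances):
    """Sum_i Sum_j P(A_i = V_ij)^2: the expected number of attribute
    values one can guess correctly for a random member of this set."""
    return sum(p ** 2 for p in attr_value_probs(instances).values())

def category_utility(partition):
    """Category utility of a partition (a list of clusters, each a list
    of instances), following Fisher (1987):

        CU = (1/n) * Sum_k P(C_k) * (ECG(C_k) - ECG(parent))

    i.e., the average per-cluster gain in guessable attribute values
    over treating all instances as a single parent class."""
    parent = [inst for cluster in partition for inst in cluster]
    total = len(parent)
    base = expected_correct_guesses(parent)
    n = len(partition)
    return sum(
        (len(cluster) / total) * (expected_correct_guesses(cluster) - base)
        for cluster in partition
    ) / n

if __name__ == "__main__":
    red = {"color": "red", "shape": "round"}
    blue = {"color": "blue", "shape": "square"}
    print(category_utility([[red, red], [blue, blue]]))  # 0.5
```

A split that makes attributes predictable scores positively (0.5 for the two perfectly coherent clusters above), while a single-cluster partition scores 0; Cobweb uses exactly this kind of comparison to decide, incrementally, where each new instance belongs in its hierarchy.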

I want to be one of the people devoted to studying human-like learning systems and promoting them in academia. I really love the claim that intelligence is not just about pattern recognition and function approximation (which is about modeling data); it is about modeling the world and ourselves in it, and using those models to support inference and action in service of our goals. This aligns with the philosophy of my study: intelligence is not about feeding data; it is about the knowledge gained and an agent’s ontology. I am proud to be part of the minority in this realm of AI research today. Being different has always motivated me, from childhood creativity to my atypical research direction, and this genuine intellectual interest drives me.

I take pride in being one of the few proponents in the field of AI who advocate for de-dataized agents, theories grounded in cognitive science, and the development of more human-like learning systems. My goal is to offer alternatives to the prevalent use of large deep learning models in AI systems, with a focus on capturing human learning characteristics, leveraging the fundamentals of cognitive science, computational psychology, information theory, and classical AI principles. My search for a graduate program is not solely about advancing my own studies; it is also about finding a platform to promote my vision of human-like learning systems.

References

[1] Pat Langley. The computational gauntlet of human-like learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 12268–12273, 2022.

[2] Douglas H Fisher. Knowledge acquisition via incremental conceptual clustering. Machine learning, 2:139–172, 1987.

[3] John R Anderson. The adaptive nature of human categorization. Psychological review, 98(3):409, 1991.

[4] Thomas L Griffiths, Adam N Sanborn, Kevin R Canini, and Daniel J Navarro. Categorization as nonparametric bayesian density estimation. The probabilistic mind: Prospects for Bayesian cognitive science, pages 303–328, 2008.

[5] Robert D Hawkins, Michael Franke, Michael C Frank, Adele E Goldberg, Kenny Smith, Thomas L Griffiths, and Noah D Goodman. From partners to populations: A hierarchical bayesian account of coordination and convention. Psychological Review, 130(4):977, 2023.

[6] James E Corter and Mark A Gluck. Explaining basic categories: Feature predictability and information. Psychological bulletin, 111(2):291, 1992.

[7] Pat Langley. Elements of machine learning. Morgan Kaufmann, 1996.

[8] Gregory Murphy. The big book of concepts. MIT press, 2004.

[9] Paul Thagard. Mind: Introduction to cognitive science. MIT press, 2005.

[10] Ron Sun. The Cambridge handbook of computational psychology. Cambridge University Press, 2008.

[11] Stuart J Russell and Peter Norvig. Artificial intelligence: a modern approach. London, 2010.

[12] Ron Sun. The Cambridge Handbook of Computational Cognitive Sciences. Cambridge University Press, 2023.

[13] Noah D Goodman, Joshua B Tenenbaum, Jacob Feldman, and Thomas L Griffiths. A rational analysis of rule-based concept learning. Cognitive science, 32(1):108–154, 2008.

[14] Ishita Dasgupta, Erin Grant, and Tom Griffiths. Distinguishing rule and exemplar-based generalization in learning systems. In International Conference on Machine Learning, pages 4816–4830. PMLR, 2022.

[15] Josh Tenenbaum. Learning to see the human way, February 2023. Lecture presented at AAAI 2023, Washington, United States, Feb 13, 2023. Available from: https://underline.io/70633-learning-to-see-the-human-way.