Seungbeen Lee
M.S. Student, Yonsei University

seungblee@yonsei.ac.kr
[E-mail] [Github] [Google Scholar]
Hello, I’m Seungbeen Lee, a master’s student at Yonsei University advised by Professor Youngjae Yu. I completed my undergraduate studies in Psychology and Economics at Yonsei University. I still love discussing Personality Psychology, Social Psychology, and Game Theory. I’ve always been fascinated by modeling human decision-making, and I graduated with the insight that humans are, above all, beings shaped by the presence and decisions of others.
I’m interested in developing agents that make decisions like humans. Importantly, I’m convinced that AI will be able to fulfill the fundamental social needs of humans. Right now, it falls just short, much like the language models of the 2010s. I’m interested in these research topics:
AI to Meet Social Desires
I think about why AI can’t yet be a meaningful friend to humans. Friendship requires a high level of detail. I’m interested in developing agents that go beyond merely rational and safe responses - agents that can make humans laugh, feel joy, feel sadness, be moved, (sometimes) feel lonely, feel supported, view the world rationally, and see it emotionally. This will require highly sophisticated language abilities, and an easier approach might be an (adorably designed) embodied form. Plain text alone is not an easy medium for making people feel immersed in an interaction.
Next State Prediction, Next Behavior Prediction
A paper I read sparked a deep interest in predicting life events using probabilistic models. Career prediction is one aspect: career choice is one of the most important ‘probabilistic’ decision-making processes in our lives. I therefore believe that with a sophisticated probabilistic model and good data, we can predict human behavioral patterns (Will they bow? Offer a handshake? Ignore?). I’m also interested in collecting and refining resources from various data sources, such as YouTube, for this purpose.
Sophisticated Reward Model in AI Brain
While humans have not necessarily evolved to be smarter, our brains are equipped with various reward models for efficient survival and reproduction. For example, the human brain releases comparable levels of dopamine for social recognition as for material recognition (ref). Most human brains were not designed to maximize intelligence; the conclusion of this long evolution is the importance of ‘sociability’. This doesn’t mean an IQ of 200. An IQ of around 90 is sufficient if one can read others’ emotional changes well and understand their needs - that’s enough to live well together. I’m interested in building such sophisticated reward models for AI.
News
Feb 05, 2025 | Our paper TRAIT has been introduced in ScienceNews! The article highlights our novel approach to testing AI personalities through 8,000 scenario-based questions, revealing distinct patterns across different language models. Article.
Publications
- Preprint: Wait... Was That a Sign? Reading Minds Through Actions: Observable Theory of Mind with Nonverbal Cues. 2025.
- Preprint: Connecting the Dots from Data: LLM-driven Tree-search Career Cartographies as Your AI Career Explorer. 2025.
- Preprint: Persona Dynamics: Unveiling the Impact of Persona Traits on Agents in Text-Based Games. 2024.
- EMNLP 2024: Can visual language models resolve textual ambiguity with visual cues? Let visual puns tell you! arXiv preprint arXiv:2410.01023, 2024.
- EMNLP 2024 Findings: CACTUS: towards psychological counseling conversations using cognitive behavioral theory. arXiv preprint arXiv:2407.03103, 2024.